ENASE 2020 Abstracts


Area 1 - Service Science and Business Information Systems

Full Papers
Paper Nr: 29
Title:

A Graph-based Approach for Process Robustness in Unreliable Communication Environments

Authors:

Frank Nordemann, Ralf Tönjes, Elke Pulvermüller and Heiko Tapken

Abstract: The Business Process Model and Notation (BPMN) is broadly used to model and execute process definitions. Many processes involve different participants and require reliable communication to operate properly. However, BPMN is used in a growing number of use cases taking place in unreliable communication environments, where intermittent or broken connectivity can interrupt or break down process operation. Methods for verifying process robustness are missing. This paper presents a graph-based approach to automatically identify robust process path configurations. Using process-to-graph transition rules and robustness metrics, graph-based search algorithms make it possible to find robust process paths and to rate their level of robustness. Process examples show that well-known shortest-path algorithms do not necessarily identify the most appropriate path. Comparing all paths using metrics for the path robustness level and robustness probability is a promising choice for most scenarios. Inspired by maximum-flow algorithms, a combined-path analysis may optimize robustness by combining process paths based on different communication technologies.
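The abstract observes that shortest-path algorithms do not necessarily find the most robust path. A minimal sketch (not the authors' algorithm; the graph, costs, and link reliabilities are invented for illustration) contrasts Dijkstra's cheapest path with a Dijkstra variant that maximises the product of per-link success probabilities:

```python
import heapq

def dijkstra_shortest(graph, src, dst):
    """Classic shortest path by edge cost."""
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, cost, _ in graph.get(u, []):
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return _trace(prev, src, dst)

def most_reliable(graph, src, dst):
    """Path maximising the product of per-edge success probabilities."""
    prob, prev, heap = {src: 1.0}, {}, [(-1.0, src)]
    while heap:
        p, u = heapq.heappop(heap)
        p = -p
        if p < prob.get(u, 0.0):
            continue
        for v, _, reliability in graph.get(u, []):
            np_ = p * reliability
            if np_ > prob.get(v, 0.0):
                prob[v], prev[v] = np_, u
                heapq.heappush(heap, (-np_, v))
    return _trace(prev, src, dst), prob.get(dst, 0.0)

def _trace(prev, src, dst):
    if dst != src and dst not in prev:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Edges: (target, cost, link reliability). The cheapest route A-B-D
# is far less reliable than the longer route A-C-D.
g = {
    "A": [("B", 1.0, 0.50), ("C", 2.0, 0.95)],
    "B": [("D", 1.0, 0.50)],
    "C": [("D", 2.0, 0.95)],
}
print(dijkstra_shortest(g, "A", "D"))  # cheapest by cost: A-B-D
print(most_reliable(g, "A", "D"))      # most robust: A-C-D
```

The example shows why rating paths by a robustness probability, as the abstract proposes, can pick a different path than a plain shortest-path search.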

Paper Nr: 36
Title:

A Methodology for Determination of Performance Measures Thresholds for Business Process

Authors:

Mariem Kchaou, Wiem Khlif and Faiez Gargouri

Abstract: Business process performance is vital for organizations that aim to produce high-performance models. In the literature, the performance of a business process can be evaluated through formal verification, simulation, or a set of measures. In this paper, we adopt measures-based assessment to evaluate the performance of business process models, modelled with the Business Process Model and Notation (BPMN), in terms of characteristics related to BPMN elements (i.e., time behaviour and cost) and characteristics related to the actor (i.e., availability, suitability, and cost). We propose a methodology based on fuzzy logic which applies performance measures to assess the levels of these characteristics. In addition, it addresses the problem of defining thresholds based on a set of BPMN models. Furthermore, our methodology evaluates the performance of business process models based on fuzzy logic. The efficiency of the proposed methodology is illustrated through a case study and a tool that fully supports the developed system.

Paper Nr: 38
Title:

Cloud Services Discovery and Selection Assistant

Authors:

Hamdi Gabsi, Rim Drira and Henda H. Ben Ghezala

Abstract: The surging popularity of cloud services has led to the emergence of numerous cloud providers who offer various services. The great variety and the exponential proliferation of cloud services over the Web introduce several functionally similar offers with heterogeneous descriptions and contrasting APIs (Application Programming Interfaces). Due to this heterogeneity, efficient and accurate service discovery and selection, based on developer-specific requirements and terminology, have become a significant challenge that requires a high level of expertise and extensive documentation study. In order to assist developers in handling these issues, first, we propose a Cloud Services Discovery and Selection Assistant (DESCA) based on a developer's query expressed in natural language. Second, we offer developers a cloud data-set, named ULID (Unified cLoud servIces Data-set), in which services offered by different cloud providers are collected, unified, and classified based on their functional features. The effectiveness of our contributions and the valuable insights they provide for improving cloud service discovery and selection are demonstrated through experimental evaluation.

Paper Nr: 61
Title:

An Approach for Deriving, Analyzing, and Organizing Requirements Metrics and Related Information in Systems Projects

Authors:

Ibtehal Noorwali, Nazim H. Madhavji and Remo Ferrari

Abstract: Large systems projects present unique challenges to the requirements measurement process: large sets of requirements across many sub-projects, requirements existing in different categories (e.g., hardware, interface, software, etc.), and varying requirements meta-data items (e.g., ID, requirement type, priority, etc.), to name a few. Consequently, requirements metrics are often incomplete, metrics and measurement reports are often unorganized, and meta-data items that are essential for applying the metrics are often incomplete or missing. To our knowledge, there are no published approaches for measuring requirements in large systems projects. In this paper, we propose a 7-step approach that combines the use of the goal-question-metric paradigm (GQM) with the identification and analysis of four main RE measurement elements (attributes, levels, metrics, and meta-data items) to aid in the derivation, analysis, and organization of requirements metrics. We illustrate the use of our approach by applying it to real-life data from the rail automation systems domain. We show how the approach led to a more comprehensive set of requirements metrics, improved organization and reporting of metrics, and improved consistency and completeness of requirements meta-data across projects.

Paper Nr: 64
Title:

Scenario-based Evolvability Analysis of Service-oriented Systems: A Lightweight and Tool-supported Method

Authors:

Justus Bogner, Stefan Wagner and Alfred Zimmermann

Abstract: Scenario-based analysis is a comprehensive technique to evaluate software quality and can provide more detailed insights than, for example, maintainability metrics. Since such methods typically require significant manual effort, we designed a lightweight scenario-based evolvability evaluation method. To increase efficiency and to limit assumptions, the method exclusively targets service- and microservice-based systems. Additionally, we implemented web-based tool support for each step. The method and tool were evaluated with a survey (N=40) that focused on change effort estimation techniques and hands-on interviews (N=7) that focused on usability. Based on the evaluation results, we further improved the method and tool support. To increase reuse and transparency, the web-based application as well as all survey and interview artifacts are publicly available on GitHub. In its current state, the tool-supported method is ready for first industry case studies.

Short Papers
Paper Nr: 2
Title:

Updating Ontology Alignment on the Relation Level based on Ontology Evolution

Authors:

Adrianna Kozierkiewicz and Marcin Pietranik

Abstract: Ontologies are becoming a popular and convenient way of representing knowledge: they can store information about objects and the relations between them. However, nothing is constant and new information may appear; such alterations should be reflected both in an ontology and in the alignment between two ontologies when knowledge about a domain is distributed across many sources. In the literature, it is possible to find approaches devoted to tracking changes in ontologies, but tools for updating ontology alignments are limited, especially those devoted to the level of relations. This became the motivation for this work, and the aim of this paper is therefore split into two parts. Firstly, we present a criterion that determines whether a modification in an ontology on the relation level is significant, and how it influences the maintained alignment (also on the relation level). Next, an algorithm for simple revalidation of existing mappings is proposed.

Paper Nr: 10
Title:

A Conceptual Method for Eliciting Trust-related Software Features for Computer-mediated Introduction

Authors:

Angela Borchert, Nicolás D. Ferreyra and Maritta Heisel

Abstract: Computer-Mediated Introduction (CMI) describes the process in which individuals with compatible intentions get to know each other through social media platforms in order to eventually meet in the physical world (e.g., sharing economy and online dating). This process involves risks such as data misuse, self-esteem damage, fraud, or violence. Therefore, it is important to assess the trustworthiness of other users before interacting with or meeting them. In order to support users in that process and thereby reduce the risks associated with CMI use, previous work has proposed developing CMI platforms that address users' trust concerns regarding other users through dedicated software features. In line with that approach, we have developed a conceptual method for requirements engineers to systematically elicit trust-related software features for a safer, user-centred CMI. The method considers not only trust concerns, but also workarounds, trustworthiness facets, and trustworthiness goals to derive requirements as a basis for appropriate trust-related software features. In this way, the method facilitates the development of application-specific software, which we illustrate with an example based on the online dating app Plenty of Fish.

Paper Nr: 19
Title:

Evolution Style Mining in Software Architecture

Authors:

Kadidiatou Djibo, Mourad Oussalah and Jacqueline Konate

Abstract: Sequential pattern extraction techniques are applied to the evolution styles of an evolving software architecture in order to plan and predict future evolution paths for the architecture. In this paper, we present a formalism to express evolution styles in a more practical way. We then analyze the styles collected with this formalism using sequential pattern extraction techniques to discover the sequential patterns of software architecture evolution. Finally, from the analysis results, we develop a learning base and prediction rules to predict future evolution paths.

Paper Nr: 21
Title:

Adding Temporal Dimension to Ontology Learning Models for Depression Signs Detection from Social Media Texts

Authors:

Patricia Martin-Rodilla

Abstract: Approaches to the early detection of depression based on an individual's language are receiving increasing attention, with detection software systems based on lexical, grammatical, or discursive components applied to medical corpora or social media texts. However, these first detection systems are fragmented, each attending to a specific feature or linguistic level rather than addressing a more conceptual level. Existing ontology learning (OL) methods extract the ontology referred to in the text. In addition, existing systems perform language analysis for the detection of depression as a snapshot of each individual, regardless of the temporal dimension. Is it possible that the linguistic features suitable for detecting early signs of depression vary over time? And the underlying ontology? This paper presents a model that adds a temporal component to current ontology learning models in order to perform evolutionary analysis of both linguistic and ontological features of texts from social networks. The model has been applied to an external corpus of depression-related social media texts, with a two-fold goal: 1) validating the model by contrasting it with OL models without a temporal component, and 2) producing a corpus of evolutionary OL results applied to depression detection from social media texts.

Paper Nr: 23
Title:

Agility of Security Practices and Agile Process Models: An Evaluation of Cost for Incorporating Security in Agile Process Models

Authors:

H. M. Maqsood and Andrea Bondavalli

Abstract: Agile process models are widely used today for software development. There has been an immense increase in the use of agile methodologies due to their focus on delivering working software and accommodating changes in requirements. However, the use of agile methodologies for developing secure systems still poses many challenges. This research addresses the issue of observing the effect on the agility of process models when security practices are applied to them. An approach is proposed which calculates the level of agility of six agile process models (XP, Scrum, FDD, ASD, DSDM, and Crystal) and of security practices against four fundamental parameters of agility. When security practices are applied to process models, they lower the degree of agility. We propose a method to observe this effect based on an agility factor, and show that the degree of agility of a process model can be adjusted to a desired level by including or excluding security practices.

Paper Nr: 65
Title:

Towards a Framework for KPI Evolution

Authors:

Eladio Domínguez, Beatriz Pérez, Ángel L. Rubio and María A. Zapata

Abstract: Key Performance Indicators (KPIs) are becoming essential elements for measuring business performance. In recent years, KPI management has been the subject of sustained interest for researchers and practitioners alike, resulting in a large body of research addressing aspects as varied as the modelling, maintenance, and expressiveness of KPIs. In particular, since both businesses and processes have to be adapted to ever-changing requirements, the KPIs that measure their performance must evolve accordingly. However, based on a previous review of the literature, we found that little attention has been paid to providing mechanisms to manage KPI evolution. Our long-term research goal is to provide a full proposal for supporting KPI evolution management. In this position paper, we present the first ideas of a conceptual framework for addressing this issue, proposing a pattern-driven KPI evolution specification and a KPI evolution metamodel made up of two interconnected views. Our proposal is general enough to be applied regardless of the specific KPI management approach being used.

Paper Nr: 82
Title:

Intelligent Luminaire based Real-time Indoor Positioning for Assisted Living

Authors:

Iuliana Marin, Maria I. Bocicor and Arthur-Jozsef Molnar

Abstract: This paper presents an experimental evaluation of the accuracy of indoor localisation. The research was carried out as part of a European Union project targeting the creation of ICT solutions for older adult care. The current expectation is that advances in technology will supplement the human workforce required for older adult care, improve their quality of life, and decrease healthcare expenditure. The proposed approach is implemented in the form of a configurable cyber-physical system that enables indoor localisation and monitoring of older adults living at home or in residential buildings. The hardware consists of custom-developed luminaires with sensing, communication, and processing capabilities. They replace the existing lighting infrastructure, do not look out of place, and are cost effective. The luminaires record the strength of a Bluetooth signal emitted by a wearable device worn by the monitored user. The system's software server uses trilateration to calculate the person's location based on known luminaire placement and recorded signal strengths. However, multipath fading caused by the presence of walls, furniture, and other objects introduces localisation errors. Our previous experiments showed that room-level accuracy can be achieved using software-based filtering for a stationary subject. Our current objective is to assess system accuracy in the context of a moving subject, and to ascertain whether room-level localisation is feasible in real time.
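Trilateration from known anchor positions and estimated ranges, as the abstract describes, can be sketched as follows. This is a generic textbook formulation, not the paper's implementation; the path-loss parameters and luminaire coordinates are illustrative assumptions:

```python
import math

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Log-distance path-loss model: range in metres from RSSI (dBm).
    tx_power is the RSSI expected at 1 m and n the path-loss exponent;
    both are deployment-specific values assumed here for illustration."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def trilaterate(anchors, distances):
    """2-D position from three anchors (x, y) and their ranges,
    by subtracting circle equations and solving the 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = distances
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Three hypothetical luminaires; ranges computed from a point at (2, 3).
anchors = [(0.0, 0.0), (6.0, 0.0), (0.0, 6.0)]
true_pos = (2.0, 3.0)
dists = [math.dist(true_pos, a) for a in anchors]
print(trilaterate(anchors, dists))  # recovers (2.0, 3.0) with exact ranges
```

With real RSSI readings the ranges are noisy, which is why the abstract's multipath-fading errors and software-based filtering matter; with exact ranges the linear solve is exact.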

Paper Nr: 91
Title:

GoSecure: Securing Projects with Go

Authors:

Maria Spichkova, Achal Vaish, David C. Highet, Isthi Irfan, Kendrick Kesley and Priyanga D. Kumar

Abstract: This paper presents an automated solution for security vulnerability scanning of Google Cloud Platform (GCP) projects, to cover gaps in the capabilities of solutions to scan GCP projects for common security issues. The elaborated security inspection tool, GoSecure, can scan multiple GCP instances against industry recognised Center for Internet Security (CIS) benchmarks for GCP. GoSecure covers all categories listed under the CIS benchmarks for GCP, providing an overview of the existing security profile of all GCP projects, along with suggestions for improvement in configurations for the individual projects.

Paper Nr: 93
Title:

A Formal Model-Based Testing Framework for Validating an IoT Solution for Blockchain-based Vehicles Communication

Authors:

Rateb Jabbar, Moez Krichen, Mohamed Kharbeche, Noora Fetais and Kamel Barkaoui

Abstract: The emergence of embedded and connected smart technologies, systems, and devices has enabled the concept of smart cities by connecting every "thing" to the Internet, in particular in transportation through the Internet of Vehicles (IoV). The main purpose of IoV is to prevent fatal crashes by resolving traffic and road safety problems. Nevertheless, it is paramount to ensure secure and accurate transmission and recording of data in "Vehicle-to-Vehicle" (V2V) and "Vehicle-to-Infrastructure" (V2I) communication. To improve "Vehicle-to-Everything" (V2X) communication, this work uses Blockchain technology for developing a Blockchain-based IoT system aimed at establishing secure communication and developing a fully decentralized cloud computing platform. Moreover, the authors propose a model-based framework to validate the proposed approach. This framework is mainly based on the use of the Attack Trees (AT) and timed automata (TA) formalisms in order to test the functional, load, and security aspects. An optimization phase for tester placement inspired by fog computing is proposed as well.

Paper Nr: 8
Title:

Practical Analysis of Traceability Problem in Monero’s Blockchain

Authors:

Michal Kedziora and Wojciech Wojtysiak

Abstract: This paper presents an analysis of cryptocurrency security based on the traceability problem in the Monero blockchain. Researchers found a weakness in Monero transactions at the beginning of the network's existence, where the real input could be deduced by elimination. We decided to do further research on the newest data available, after several Monero updates were introduced and implemented, to evaluate whether this weakness is still present in recent transactions. The analysis of the ring signature sizes used in transactions across subsequent versions of the Monero network has shown that the minimum size required for a given version is most often used, which can in some rare situations potentially lead to the creation of a user profile and the identification of transactions.

Paper Nr: 31
Title:

Evaluation of Scrum-based Agile Scaling Models for Causes of Scalability Challenges

Authors:

Necmettin Ozkan and Ayca K. Tarhan

Abstract: The Agile Software Development (ASD) community has come up with the idea of scaling and created models/frameworks for large-scale set-ups. Despite these models, the application of agile methods to large projects remains a challenging research question for the community. The aim of this work is to evaluate Scrum-based scaling models (including SoS, Nexus, LeSS, SAFe, Scrum at Scale, DAD, and RAGE) in terms of how, and to what extent, they are able to provide scaling solutions for the identified causes of challenges in the Agile Manifesto and Scrum Guide, in the context of software development and project management. The study maps the solution proposals of the models to the underlying causes of challenges and validates them with three experts. Considering the experts' views, the evaluation provides a thorough perspective on the models' solutions against the common pain points underlying the manifesto and guide with respect to scaling. With some exceptions, we see that all of the models address the pain points associated with scalability in the core of ASD.

Paper Nr: 96
Title:

Crisis Management Systems: Big Data and Machine Learning Approach

Authors:

Abderrazak Boumahdi, Mahmoud El Hamlaoui and Mahmoud Nassar

Abstract: A crisis is defined as an event that, by its nature or its consequences, poses a threat to vital national interests or the basic needs of the population, demands rapid decision making, and requires coordination between various departments and agencies. Hence the need for and importance of crisis and disaster management systems. These systems comprise several phases and techniques, and require many resources and tactics. Among their needs is useful and necessary information that can improve decision making, such as data on past and current crises. Applying machine learning and big data technologies to the processing of crisis and disaster data can yield important results in this area. In the first part of this paper, we address crisis management systems and the big data and machine learning tools that can be used. In the second part, we establish a literature review that includes a state of the art and a discussion. We then propose a machine learning and big data approach for crisis management systems, with a description and experimentation, as well as a discussion of results and future work.

Area 2 - Software Engineering

Full Papers
Paper Nr: 3
Title:

Visual Languages for Supporting Big Data Analytics Development

Authors:

Hourieh Khalajzadeh, Andrew J. Simmons, Mohamed Abdelrazek, John Grundy, John Hosking and Qiang He

Abstract: We present BiDaML (Big Data Analytics Modeling Languages), an integrated suite of visual languages and a supporting tool to help end-users with the engineering of big data analytics solutions. BiDaML, our visual notation suite, comprises six diagrammatic notations: brainstorming diagrams, process diagrams, technique diagrams, data diagrams, output diagrams, and deployment diagrams. The BiDaML tool provides a platform for efficiently producing BiDaML visual models and facilitating their design, creation, code generation, and integration with other tools. To demonstrate the utility of BiDaML, we illustrate our approach with a real-world example of traffic data analysis. We evaluate BiDaML using two types of evaluation: the physics of notations and a cognitive walkthrough with several target end-users (e.g., data scientists and software engineers).

Paper Nr: 9
Title:

CkTail: Model Learning of Communicating Systems

Authors:

Sébastien Salva and Elliott Blot

Abstract: Event logs are helpful to figure out what is happening in a system or to diagnose the causes that led to an unexpected crash or security issue. Unfortunately, their growing size and lack of abstraction make them difficult to interpret, especially when a system integrates several communicating components. This paper proposes to learn models of communicating systems, e.g., Web service compositions, distributed applications, or IoT systems, from their event logs in order to help engineers understand how they are functioning and diagnose them. Our approach, called CkTail, generates one Input Output Labelled Transition System (IOLTS) for every component participating in the communications, along with dependency graphs illustrating another viewpoint of the system architecture. Compared to other model learning approaches, CkTail improves the precision of the generated models by better recognising sessions in event logs. Experimental results obtained from 9 case studies show the effectiveness of CkTail in recovering accurate and general models along with component dependency graphs.

Paper Nr: 13
Title:

WOBCompute: Architecture and Design Considerations of a P2P Computing System

Authors:

Levente Filep

Abstract: For large-scale scientific computing, many alternative solutions to cloud computing services exist, which combine existing, cheap, commodity hardware into computational clusters. The majority of these, due to their ease of deployment, are based on a client-server architecture. Decentralized approaches employ some form of peer-to-peer (P2P) design; however, due to their increased design complexity and the lack of major benefits over client-server systems, none of them has gained wide popularity. The P2P system presented in this paper features decentralized task coordination, the possibility of suspending, migrating, and resuming workloads on different nodes, remote checkpoints that allow partial result recovery, and workload tracking, which offers the possibility of initiating communication between workloads. Design considerations and choices for this system are presented and discussed. The chosen topology is super-peer-managed clusters arranged in an extended star topology, evaluated by simulation. Such a system comes with enormous design complexity; however, a middleware can hide this complexity while providing applications a simple interface to access network resources. By harnessing idle computing resources, the system can be deployed on a combination of in-house computer networks, personal and volunteer devices, as well as cloud-based VMs.

Paper Nr: 14
Title:

Pattern-driven Design of a Multiparadigm Parallel Programming Framework

Authors:

Virginia Niculescu, Frédéric Loulergue, Darius Bufnea and Adrian Sterca

Abstract: Parallel programming is more complex than sequential programming. It is therefore more difficult to achieve the same software quality in a parallel context. High-level parallel programming approaches are intermediate approaches in which users are offered simplified APIs, trading some expressivity for programming productivity while still offering good performance. By being less error-prone, high-level approaches can improve application quality. From the API user's point of view, such approaches should provide ease of programming without hindering performance. From the API implementor's point of view, such approaches should be portable across parallel paradigms and extensible. JPLF is a framework for the Java language based on the theory of powerlists, which are parallel recursive data structures. It is a high-level parallel programming approach that possesses the qualities mentioned above. This paper reflects on the design of JPLF: it explains the design choices and highlights the design patterns and design principles applied to build JPLF.

Paper Nr: 20
Title:

Framework of Software Design Patterns for Energy-Aware Embedded Systems

Authors:

Marco Schaarschmidt, Michael Uelschen, Elke Pulvermüller and Clemens Westerkamp

Abstract: With the increasing size and complexity of embedded systems, the impact of software on energy consumption is becoming more important. Previous research focused mainly on energy optimization at the hardware level. However, little research has been carried out regarding energy optimization at the software design level. This paper focuses on the software design level and addresses the gap between software and hardware design for embedded systems. This is achieved by proposing a framework for software design patterns, which takes aspects of power consumption and time behavior of the hardware level into account. We evaluate the expressiveness of the framework by applying it to well-known and novel design patterns. Furthermore, we introduce a dimensionless numerical efficiency factor to make possible energy savings quantifiable.

Paper Nr: 24
Title:

Bi-directional Transformation between Normalized Systems Elements and Domain Ontologies in OWL

Authors:

Marek Suchánek, Herwig Mannaert, Peter Uhnák and Robert Pergl

Abstract: Knowledge representation in OWL ontologies gained a lot of popularity with the development of Big Data, Artificial Intelligence, the Semantic Web, and Linked Open Data. OWL ontologies are very versatile, and there are many tools for analysis, design, documentation, and mapping. They can capture concepts and categories, their properties and relations. Normalized Systems (NS) provide a way of generating code from a model of so-called NS Elements, resulting in an information system with proven evolvability. The model used in NS contains domain-specific knowledge that can be represented in an OWL ontology. This work clarifies the potential advantages of having an OWL representation of the NS model, discusses the design of a bi-directional transformation between NS models and domain ontologies in OWL, and describes its implementation. It shows how the resulting ontology enables further work on the analytical level and leverages the system design. Moreover, because the NS metamodel is metacircular, the transformation can generate an ontology of the NS metamodel itself. It is expected that the results of this work will help with the design of larger real-world applications as well as the metamodel, and that the transformation tool will be further extended with the additional features we have proposed.

Paper Nr: 26
Title:

Towards Web Application Security by Automated Code Correction

Authors:

Ricardo Morgado, Ibéria Medeiros and Nuno Neves

Abstract: Web applications are commonly used to provide access to the services and resources offered by companies. However, they are known to contain vulnerabilities in their source code, which, when exploited, can cause serious damage to organizations, such as the theft of millions of user credentials. For this reason, it is crucial to protect critical services, such as health care and financial services, with safe web applications. Often, vulnerabilities are left in the source code unintentionally by programmers because they have insufficient knowledge of how to write secure code. For example, developers often employ sanitization functions of the programming language, believing that these will defend their applications. However, some of those functions do not invalidate all attacks, leaving applications still vulnerable. This paper presents an approach and a tool capable of automatically correcting relevant classes of vulnerabilities (XSS and SQL injection) in web applications. The tool was evaluated with both benchmark test cases and real code, and the results are very encouraging. They show that the tool can insert safe and correct fixes while maintaining the original behavior of the web applications in the vast majority of cases.

Paper Nr: 32
Title:

A Set of Empirically Validated Development Guidelines for Improving Node-RED Flows Comprehension

Authors:

Diego Clerissi, Maurizio Leotta and Filippo Ricca

Abstract: Internet of Things (IoT) systems are rapidly gaining importance in human society, providing a variety of services to improve the quality of our lives and involving complex and safety-critical tasks; therefore, assuring their quality is of paramount importance. Node-RED is a Web-based visual tool inspired by the flow-based programming paradigm, built on Node.js, which has recently emerged to support users in developing IoT systems in a simple manner. The community behind Node-RED is quite active and encourages artefact sharing. Thus, the Node-RED flows developed and submitted for public use should be easy to comprehend and integrate within already existing systems, also in preparation for future maintenance and testing activities. Unfortunately, no consolidated approaches or guidelines for developing comprehensible Node-RED flows currently exist. In this paper, we propose a set of guidelines to help Node-RED developers produce flows that are easy to comprehend and use. We have designed and conducted an experiment to evaluate the effect of the guidelines on Node-RED flow comprehension. Results show that the adoption of the guidelines significantly reduces the number of errors (p-value = 0.00903) and the time required to comprehend Node-RED flows (p-value = 0.04883).

Paper Nr: 34
Title:

Longitudinal Evaluation of Open-source Software Maintainability

Authors:

Arthur-Jozsef Molnar and Simona Motogna

Abstract: We present a longitudinal study on the long-term evolution of maintainability in open-source software. Quality assessment remains at the forefront of both software research and practice, with many models and assessment methodologies proposed and used over time. Some of them helped create and shape standards such as ISO 9126 and 25010, which are well established today. Both describe software quality in terms of characteristics such as reliability, security or maintainability. An important body of research exists linking these characteristics with software metrics, and proposing ways to automate quality assessment by aggregating software metric values into higher-level quality models. We employ the Maintainability Index, technical debt ratio and a maintainability model based on the ARiSA Compendium. Our study covers the entire 18 year development history and all released versions for three complex, open-source applications. We determine the maintainability for each version using the proposed models, we compare obtained results and use manual source code examination to put them into context. We examine the common development patterns of the target applications and study the relation between refactoring and maintainability. Finally, we study the strengths and weaknesses of each maintainability model using manual source code examination as the baseline.
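The Maintainability Index the abstract employs is commonly computed with the classic Oman and Hagemeister formulation; the sketch below uses that well-known formula with hypothetical metric values, not data from the study:

```python
import math

def maintainability_index(halstead_volume, cyclomatic_complexity, loc):
    """Classic (Oman & Hagemeister) Maintainability Index from
    average Halstead Volume, cyclomatic complexity, and lines of code.
    Higher values indicate code that is easier to maintain."""
    return (171
            - 5.2 * math.log(halstead_volume)
            - 0.23 * cyclomatic_complexity
            - 16.2 * math.log(loc))

# Hypothetical averages for two versions of a module: growth in
# size and complexity between releases lowers the index.
v1 = maintainability_index(halstead_volume=1000.0, cyclomatic_complexity=5, loc=120)
v2 = maintainability_index(halstead_volume=4000.0, cyclomatic_complexity=12, loc=300)
print(round(v1, 1), round(v2, 1))
assert v2 < v1  # the later, larger version scores lower
```

Tracking such per-version values over a release history is the kind of longitudinal signal the study aggregates across 18 years of development.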

Paper Nr: 35
Title:

Systematic Treatment of Security Risks during Requirements Engineering

Authors:

Roman Wirtz and Maritta Heisel

Abstract: In recent years, a significant number of security breaches have been reported. A security breach can lead to value loss for stakeholders, not only financially but also in terms of reputation. The likelihood and consequence of a scenario impacting the security of software constitute a risk level. Risk management describes coordinated activities to identify, evaluate, and treat risks. By following the principle of security-by-design and treating risks as early as possible during software development, costs can be reduced significantly. Based on our previous work on identifying and evaluating risks, we aim to assist developers in treating risks in one of the earliest phases, i.e., during requirements engineering. To do so, we propose a stepwise method for selecting and documenting suitable countermeasures, i.e., controls. As input, it takes a requirements model and a CORAS security model. A distinguishing feature of our method is that we use patterns in the form of templates to evaluate the effectiveness of controls. Furthermore, we integrate the selected controls into the requirements model following an aspect-oriented approach. The resulting model can be used as input for the design phase, thus helping to create an architecture that considers security right from the beginning.

Paper Nr: 42
Title:

Impact of Combining Syntactic and Semantic Similarities on Patch Prioritization

Authors:

Moumita Asad, Kishan K. Ganguly and Kazi Sakib

Abstract: Patch prioritization means sorting candidate patches based on their probability of correctness. It helps to minimize bug-fixing time and maximize the precision of an automated program repair technique by ranking correct solutions before incorrect ones. Recent program repair approaches have used either syntactic or semantic similarity between the faulty code and the fixing ingredient to prioritize patches. However, the impact of a combined approach on patch prioritization has not been analyzed yet. For this purpose, two patch prioritization methods are proposed in this paper. Genealogical and variable similarity are used to measure semantic similarity, since these are good at differentiating between correct and incorrect patches. Two popular metrics, namely normalized longest common subsequence and token similarity, are considered individually for capturing syntactic similarity. To observe the combined impact of the similarities, the proposed approaches are compared with patch prioritization techniques that use either semantic or syntactic similarity alone. For comparison, 246 replacement mutation bugs from a historical bug-fix dataset are used. Both methods outperform the semantic and syntactic similarity based approaches in terms of the median rank of the correct patch and search space reduction. In 11.79% and 10.16% of cases, the combined approaches rank the correct solution in first position.
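
As an illustration, the normalized longest common subsequence metric mentioned above can be computed over token sequences as sketched below; the linear combination at the end is a hypothetical aggregation shown for illustration only, not necessarily the weighting used in the paper:

```python
def lcs_length(a, b):
    """Dynamic-programming length of the longest common subsequence."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[-1][-1]

def normalized_lcs(faulty_tokens, ingredient_tokens):
    """Syntactic similarity in [0, 1] between faulty code and a fixing ingredient."""
    if not faulty_tokens or not ingredient_tokens:
        return 0.0
    return (lcs_length(faulty_tokens, ingredient_tokens)
            / max(len(faulty_tokens), len(ingredient_tokens)))

def combined_score(semantic, syntactic, weight=0.5):
    """Hypothetical linear combination of the two similarity families."""
    return weight * semantic + (1 - weight) * syntactic
```

Candidate patches would then be sorted in descending order of the combined score before validation.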

Paper Nr: 54
Title:

Artificial Intelligence in Software Test Automation: A Systematic Literature Review

Authors:

Anna Trudova, Michal Dolezel and Alena Buchalcevova

Abstract: Artificial intelligence (AI) has made a considerable impact on the software engineering field, and the area of software testing is not an exception. In theory, AI techniques could help to achieve the highest possible level of software test automation. The goal of this Systematic Literature Review (SLR) paper is to highlight the role of artificial intelligence in the software test automation area through cataloguing AI techniques and related software testing activities to which the techniques can be applied. Specifically, the potential influence of AI on those activities was explored. To this end, the SLR was performed with the focus on research studies reporting the implementation of AI techniques in software test automation. Out of 34 primary studies that were included in the final set, 9 distinct software testing activities were identified. These activities had been reportedly improved by applying the AI techniques mostly from the machine learning and computer vision fields. According to the reviewed primary studies, the improvement was achieved in terms of reusability of test cases, manual effort reduction, improved coverage, improved fault and vulnerability detection. Several publicly accessible AI-enhanced tools for software test automation were discovered during the review as well. Their short summary is presented.

Paper Nr: 71
Title:

A Workflow for Automatically Generating Application-level Safety Mechanisms from UML Stereotype Model Representations

Authors:

Lars Huning, Padma Iyenghar and Elke Pulvermueller

Abstract: Safety-critical systems operate in contexts where failure may lead to serious harm for humans or the environment. Safety standards, e.g., IEC 61508 or ISO 26262, provide development guidelines to improve the safety of such systems. For this, they recommend a variety of safety mechanisms to mitigate possible safety hazards. While these standards recommend certain safety mechanisms, they do not provide any concrete development or implementation assistance for any of these techniques. This paper presents a detailed workflow showing how such safety mechanisms may be automatically generated from UML model representations in a model-driven development process. We illustrate this approach by applying it to the modeling and automatic generation of voting mechanisms, which are a widespread safety mechanism in safety-critical systems that employ some form of redundancy for fault detection or fault masking. Finally, we study the scalability of the proposed code generation via quantitative experiments.
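
To illustrate the kind of mechanism being generated, a hand-written majority voter over redundant channels might look as follows; this is a sketch of the general technique only, not the generated code or its UML stereotype input:

```python
from collections import Counter

def majority_vote(replica_values):
    """Majority voter over redundant channel outputs.

    Returns the value agreed on by a strict majority, or raises when
    no majority exists (the fault is detected but cannot be masked)."""
    value, count = Counter(replica_values).most_common(1)[0]
    if count * 2 <= len(replica_values):
        raise ValueError("no majority: unmaskable fault detected")
    return value

# Triple modular redundancy: a single faulty channel is masked
assert majority_vote([42, 42, 41]) == 42
```

With three replicas the voter masks one fault; with only two replicas it can detect a disagreement but never mask it.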

Paper Nr: 99
Title:

Towards Human-centric Model-driven Software Engineering

Authors:

John Grundy, Hourieh Khalajzadeh and Jennifer Mcintosh

Abstract: Many current software systems suffer from a lack of consideration of the human differences between end users. These include age, gender, language, culture, emotions, personality, education, physical and mental challenges, and so on. We describe our work on considering these characteristics by incorporating human-centric issues throughout the model-driven engineering process lifecycle. We propose the use of the co-creational "living lab" model to better collect human-centric issues in the software requirements. We focus on modelling these human-centric factors using domain-specific visual languages, themselves human-centric modelling artefacts. We describe work to incorporate these human-centric issues into model-driven engineering design models, and to support both code generation and run-time adaptation to different user human factors. We discuss continuous evaluation of such human-centric issues in the produced software, and feedback of user-reported defects into requirements and model refinement.

Short Papers
Paper Nr: 4
Title:

Incremental Bidirectional Transformations: Evaluating Declarative and Imperative Approaches using the AST2Dag Benchmark

Authors:

Matthias Bank, Sebastian Kaske, Thomas Buchmann and Bernhard Westfechtel

Abstract: Model transformations are at the core of model-driven software engineering. Typically, an initial model is refined throughout the development process using model transformations to derive subsequent models until eventually code is generated. In round-trip engineering processes, these model transformations are performed not only in the forward but also in the backward direction. To this end, bidirectional transformation languages provide a single transformation definition for both directions. This paper evaluates the transformation languages QVT Relations (QVT-R), which allows incremental bidirectional transformations to be specified declaratively at a high level of abstraction, and BXtend, a framework for the procedural specification of both forward and backward transformations in a single rule set. Both languages have been used to implement the AST2Dag transformation example. The Benchmarx framework was used for a quantitative and qualitative evaluation of the obtained results.

Paper Nr: 16
Title:

Preference-based Conflict Resolution for Collaborative Configuration of Product Lines

Authors:

Sabrine Edded, Sihem Ben Sassi, Raul Mazo, Camille Salinesi and Henda Ben Ghezala

Abstract: In the context of product lines, the collaborative configuration process gets complicated when the configuration decisions of the involved stakeholders are contradictory, which may lead to conflicting situations. Although considerable research has been devoted to collaborative configuration, little attention has been paid to conflict resolution. Moreover, most existing approaches rely on a systematic process which constrains the decisions of some stakeholders. In this paper, we propose a new collaborative configuration approach which allows conflict resolution based on stakeholders' preferences expressed through a set of substitution rules. Based on such preferences, we delete the minimal set of conflicting configuration decisions, which are identified using a Minimal Correction Subsets (MCSs) computing algorithm. An illustrative example and a tool prototype are presented to evaluate the applicability of our approach.
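
A brute-force sketch of Minimal Correction Subset computation for this setting follows, assuming the deliberately simple conflict model that two decisions clash when they assign different values to the same feature (real product-line constraints, and the algorithm used in the paper, are richer):

```python
from itertools import combinations

def minimal_correction_subsets(decisions):
    """Brute-force MCSs: minimal sets of decisions whose removal
    leaves the remaining decisions conflict-free.

    decisions: list of (stakeholder, feature, value) tuples; two
    decisions conflict when they give the same feature different values."""
    def consistent(kept):
        seen = {}
        for _, feature, value in kept:
            if seen.setdefault(feature, value) != value:
                return False
        return True

    mcss = []
    for k in range(len(decisions) + 1):  # smallest removal sets first
        for removed in combinations(range(len(decisions)), k):
            # skip supersets of an already-found MCS (non-minimal)
            if any(set(m) <= set(removed) for m in mcss):
                continue
            kept = [d for i, d in enumerate(decisions) if i not in removed]
            if consistent(kept):
                mcss.append(removed)
    return [frozenset(decisions[i] for i in m) for m in mcss]
```

Stakeholder preferences would then pick which MCS to actually delete, e.g. the one touching the lowest-priority stakeholders.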

Paper Nr: 25
Title:

Language-oriented Sentiment Analysis based on the Grammar Structure and Improved Self-attention Network

Authors:

Hien D. Nguyen, Tai Huynh, Suong N. Hoang, Vuong T. Pham and Ivan Zelinka

Abstract: In business, sentiment analysis lets brands understand the sentiment of their customers. They can know what people are saying, how they are saying it, and what they mean. There are many methods for sentiment analysis; however, they are not effective when applied to the Vietnamese language. In this paper, a method for Vietnamese sentiment analysis is studied, combining the structure of the Vietnamese language with a natural language processing technique: self-attention in the Transformer architecture. Based on an analysis of the structure of a sentence, the Transformer is used to process word positions to determine the meaning of that sentence. The experimental results show that our method is more effective than others for Vietnamese sentiment analysis. Its accuracy and F-measure are above 91%, and its results are suitable for practical application in business intelligence.
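
The self-attention step at the heart of the Transformer can be sketched in its minimal form (single head, no learned projection matrices, so Q = K = V = X); this is the generic mechanism only, not the paper's full model:

```python
import math

def self_attention(X):
    """Scaled dot-product self-attention over a list of d-dimensional vectors.

    Each output vector is a softmax-weighted mix of all input vectors,
    letting every position attend to every other position."""
    d = len(X[0])
    scores = [[sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in X]
              for q in X]
    out = []
    for row in scores:
        m = max(row)                       # subtract max for numeric stability
        exps = [math.exp(s - m) for s in row]
        total = sum(exps)
        weights = [e / total for e in exps]
        out.append([sum(w * v[j] for w, v in zip(weights, X)) for j in range(d)])
    return out
```

In the full architecture, positional encodings are added to the inputs first, which is how the mechanism takes word positions into account.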

Paper Nr: 27
Title:

Assisted Generation of Privacy Policies using Textual Patterns

Authors:

Nazila G. Mohammadi, Jens Leicht, Ludger Goeke and Maritta Heisel

Abstract: To comply with data protection legislation, privacy policies are a widely used approach and an important legal foundation for data handling. These policies are created by service providers. The creation of a privacy policy for a service is time consuming, and compliance with legislation is hard to ensure. According to the General Data Protection Regulation of the European Union, service providers should provide a transparent privacy policy in a way that is comprehensible for end users. This paper provides an approach for the assisted generation of privacy policies using textual patterns. We also provide a proof-of-concept implementation of a tool for the privacy policy generation approach. The proposed approach supports service providers in their task of providing a comprehensible privacy policy, which allows better transparency.

Paper Nr: 28
Title:

Using Model-based Approach for Assessing Standard Operating Procedures

Authors:

Mohammad Alhaj

Abstract: Standard Operating Procedures (SOPs) are essential for organizations that want to maintain efficient and organized local tasks. They are used to develop the activities important for completing internal operations in accordance with industry regulations and business standards. A growing interest and investment have been directed at developing and assessing SOPs. It is desirable to detect any deficiencies in the SOPs that contradict policies and regulations, or that prevent achieving the business goals, early in the process, when corrective actions can be made more easily. This paper proposes a model-based approach that generates goal and scenario models for standard operating procedures, augmented with quantitative indicators. The generated models allow the behaviour of the standard operating procedures to be modelled in a formal way and their performance measures to be evaluated.

Paper Nr: 33
Title:

Predicting User Satisfaction in Software Projects using Machine Learning Techniques

Authors:

Łukasz Radliński

Abstract: User satisfaction is an important aspect of software quality. The factors of user satisfaction and its impact on project success were analysed in various studies. However, very few studies investigated the ability to predict user satisfaction. This paper presents the results of such a challenge. The analysis was performed with the ISBSG dataset of software projects. The target variable, the satisfaction score, was defined as the sum of eight variables reflecting different aspects of user satisfaction. Twelve machine learning algorithms were used to build 40 predictive models. Each model was evaluated on 20 passes with a test subset. On average, a random forest model with missing data imputation by mode and mean achieved the best performance, with a macro mean absolute error of 1.88. The four variables with the highest importance for this model's predictions are: survey respondent role, log(effort estimate), log(summary work effort), and proportion of major defects. On average, 14 models performed worse than a simple baseline model. While the best performing models deliver predictions with satisfactory accuracy, high variability of performance between different model variants was observed. Thus, a careful selection of model settings is required when attempting to use such a model in practice.
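
The imputation step used by the best-performing model, mean for numeric columns and mode for the rest, can be sketched as follows; the record layout and column names here are hypothetical, not taken from the ISBSG dataset:

```python
from statistics import mean, mode

def impute(rows, numeric_cols):
    """Fill missing values (None): column mean for numeric columns,
    column mode for categorical ones.

    rows: list of dicts sharing the same keys."""
    cols = list(rows[0].keys())
    fill = {}
    for c in cols:
        observed = [r[c] for r in rows if r[c] is not None]
        fill[c] = mean(observed) if c in numeric_cols else mode(observed)
    return [{c: (r[c] if r[c] is not None else fill[c]) for c in cols}
            for r in rows]
```

After imputation, the completed table could be fed to any of the twelve learners; only this preprocessing step is shown.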

Paper Nr: 40
Title:

Migration of Monolith Applications to Miniservices: A Case Study from the Telecom Domain

Authors:

Ümit Kanoğlu, Ali İmre and Oumout Chouseinoglou

Abstract: More organizations are considering the transformation of their existing monolithic applications to microservices in order to increase competitiveness and utilize the benefits of new software architectures which meet their business needs. However, due to detailed and extensive requirements of the microservice architecture (MSA), organizations either implement microservices at different granularity levels or decide not to undertake this migration even though the business need is evident. Miniservices have been proposed as an intermediate alternative between monoliths and microservices, with a larger scope of services and more relaxed architectural constraints. This paper introduces the concept of miniservice architecture (MnSA) to the industry domain, proposes a methodology to be implemented for the migration of a monolith application to MnSA and shows the applicability of this methodology with a detailed case study from the telecom domain.

Paper Nr: 50
Title:

Automated Evaluation and Narratives in Computer Science Education

Authors:

Zsigmond Imre, Andrei Zamfirescu and Horia F. Pop

Abstract: For university-level computer science teachers, assignment verification and validation uses a disproportionate amount of time. This leaves them with little time to help struggling students or to try newer teaching techniques. We set out to automate the tedious work and to incorporate instant feedback and narrative gamification mechanics. During our semester-long study, our solution freed up a significant amount of time. The results suggest that more research, and more gamification mechanics, are warranted.

Paper Nr: 57
Title:

Software Quality Observation Model based on the ISO/IEC 29110 for Very Small Software Development Entities

Authors:

Alexander Redondo-Acuña and Beatriz Florian-Gaviria

Abstract: An imbalance exists between the quality of software development for researchers on the one hand, and productivity for the software industry on the other. However, clients demand both, so there is a gap between researchers and the software industry. Therefore, it is necessary to attune software quality research to productivity. It is also necessary that the software industry understands the benefit of incorporating quality practices bonded to productivity. This paper proposes an observation model that allows the internal practices of a small software development organization to be modelled in comparison to those described in the ISO/IEC 29110 standard. It consists of four main components: first, a visual frame of three axes: 1) the process domains and subdomains based on the profile process, 2) roles, and 3) maturity level; second, a battery of indicators on this three-dimensional visual frame; third, a series of surveys designed for primary data collection from employees performing the roles of the model in Very Small Entities (VSEs); and fourth, the survey results, which disclose the values needed to compute the metrics of the indicators.

Paper Nr: 63
Title:

Aspect Weaving for Multiple Video Game Engines using Composition Specifications

Authors:

Ben J. Geisler and Shane L. Kavage

Abstract: In the realm of video game development, unique Domain Specific Languages (DSLs) are used in each of the most popular game engines, making code sharing and reuse extremely difficult. For this reason, common software engineering practices such as design patterns and modularity have lagged. GAMESPECT is an aspect-oriented DSL (DSAL) that seeks to generalize the concerns of video game programming. This paper explores the technology involved, namely composition specifications, which enable the usage of Xtext and TXL to weave aspect code into multiple game engines and multiple languages. We describe the four main steps of the weaving process: reification, matching, ordering, and mixing. Our results demonstrate the technical accuracy of the DSAL as well as its efficiency across several samples in Unreal Engine 4 (UE4) and Unity. The DSAL employed is a single-to-many source language featuring transformation and aspect insertion (via weaving) into multiple languages in these engines, including C++, SkookumScript, Lua, and C#. The GAMESPECT technology has been employed beneficially in modern video game development across active titles on PC, Android, and Nintendo Switch.

Paper Nr: 68
Title:

Automated End-to-End Timing Analysis of AUTOSAR-based Causal Event Chains

Authors:

Padma Iyenghar, Lars Huning and Elke Pulvermueller

Abstract: Reflecting the ever-increasing complexity of automotive embedded systems and their stringent timing requirements, a significant level of automated tooling is imperative to enhance software quality. In this context, although AUTOSAR-based systems are nowadays increasingly developed using powerful Unified Modeling Language (UML) tools, there is a lack of an integrated approach and automated tooling for the validation of timing properties. Addressing this aspect, we present a systematic software engineering and automation approach for the timing analysis of AUTOSAR-based systems in state-of-the-practice timing analysis tools, already in the early design stages. A specific example of a widely used timing analysis measure for automotive systems, namely end-to-end delay analysis, is described with the help of an AUTOSAR-based causal event chain example. The practical relevance of the approach is demonstrated by evaluating it in an elaborate automotive case study.

Paper Nr: 69
Title:

SAS vs. NSAS: Analysis and Comparison of Self-Adaptive Systems and Non-Self-Adaptive Systems based on Smells and Patterns

Authors:

Claudia Raibulet, Francesca A. Fontana and Simone Carettoni

Abstract: Self-adaptive systems are usually built from a managed part, implementing their functionality, and a managing part, implementing their self-adaptation. The complexity of self-adaptive systems also results from the existence of the managing part and the interaction between the managed and managing parts. Non-self-adaptive systems may be seen as the managed part of self-adaptive systems. Self-adaptive systems are usually evaluated based on the performance resulting from their self-adaptation. However, self-adaptive systems are software systems, and hence their software quality is equally important. Our analysis compares the internal quality of self-adaptive and non-self-adaptive systems by considering code smells, architectural smells, and the GoF design patterns. This comparison provides insight into the health of self-adaptive systems with respect to non-self-adaptive systems (the latter being considered as a quality reference).

Paper Nr: 73
Title:

Microservice Decomposition: A Case Study of a Large Industrial Software Migration in the Automotive Industry

Authors:

Heimo Stranner, Stefan Strobl, Mario Bernhart and Thomas Grechenig

Abstract: In a microservice architecture, a set of relatively small services is deployed, which communicate with each other only over the network. Monoliths regularly suffer from poor scalability and maintainability. Several approaches for decomposing them into microservices have been proposed with the aim of improving these characteristics. However, precise descriptions of these approaches in combination with large-scale industrial evaluations are still rare in the academic literature. This case study focuses on a large ERP system in the automotive industry. We applied an approach based on the concept of bounded contexts for one such decomposition and documented the necessary changes to the system, such as the introduction of facades to facilitate an incremental migration towards microservices in a non-disruptive manner. Furthermore, we conducted expert interviews to evaluate our findings. While the migration is still ongoing, we were able to achieve significant adoption rates of the new paradigm and a clear preference of architects and developers for using it. Development speed has also drastically improved.

Paper Nr: 74
Title:

A User Centered Approach in Designing Computer Aided Assessment Applications for Preschoolers

Authors:

Adriana-Mihaela Guran, Grigoreta-Sofia Cojocar and Anamaria Moldovan

Abstract: Today's children are growing up surrounded by technology. The appropriate use of technology is important in determining their attitude towards it, and education should support the right approach in this sense. Adjusting teaching and evaluation methods to current trends is necessary, starting from a very young age. In this paper, we present a user-centered approach for developing computer-aided assessment applications for preschoolers from our country. We describe our approach and present a case study of applying it, together with a discussion of the challenges, lessons learned, and future development.

Paper Nr: 77
Title:

Combining Semi-formal and Formal Methods for Safety Control in Autonomous Mobility-on-Demand Systems

Authors:

Mohamed Naija, Rihab Khemiri and Ernesto Exposito

Abstract: Ensuring the safety control of Autonomous Mobility-on-Demand (AMoD) systems is one of the biggest challenges facing designers on the way to successful deployment. The addition of adaptability to such systems further complicates and delays the modelling and validation phase, especially due to the current lack of design models and tools. Formal methods have proven useful for making the development process reliable at early design stages. Based on this approach, this paper proposes a mixed process to specify, design, and verify safety requirements in adaptive AMoD systems. This process provides analytical proofs of safety requirements during the design stage of a system, when changes are cheap. The contribution combines the UML MARTE profile for modelling the workload behaviour of the system with the Net Condition/Event System formalism for the consistency validation of safety properties. To verify the effectiveness of our proposal, several formal analyses are carried out using the model checker SESA. The evaluation of the proposed architecture, simulated with the SUMO software, shows the impact of the number of autonomous vehicles on the global performance and the intended quality of service (QoS) in the framework of the TORNADO project.

Paper Nr: 79
Title:

An Experience in Collecting Requirements for Mobile, Energy Efficient Applications from End Customers in the Bank Sector

Authors:

Vladimir Ivanov, Pavel Kolychev, Sergey Masyagin, Giancarlo Succi, Rafael Valeev and Vasilii Zorin

Abstract: Several development processes strongly recommend user participation and involvement in requirements acquisition. However, there are very few studies detailing the empirical results of direct user involvement in large industrial software development products. In this paper, we report the outcomes of a novel approach taken by the software house of one of the major Russian banks (Ak Bars Bank) to improve the development process by directly involving end customers in the requirements elicitation phase of mobile, energy-efficient applications. We observe that such involvement, in the form of a workshop, has led to improved requirements collection and higher levels of user satisfaction.

Paper Nr: 80
Title:

ATDx: Building an Architectural Technical Debt Index

Authors:

Roberto Verdecchia, Patricia Lago, Ivano Malavolta and Ipek Ozkaya

Abstract: Architectural technical debt (ATD) in software-intensive systems refers to the architecture design decisions which work as expedient in the short term, but later negatively impact system evolvability and maintainability. Over the years numerous approaches have been proposed to detect particular types of ATD at a refined level of granularity via source code analysis. Nevertheless, how to gain an encompassing overview of the ATD present in a software-intensive system is still an open question. In this study, we present a multi-step approach designed to build an ATD index (ATDx), which provides insights into a set of ATD dimensions building upon existing architectural rules by leveraging statistical analysis. The ATDx approach can be adopted by researchers and practitioners alike in order to gain a better understanding of the nature of the ATD present in software-intensive systems, and provides a systematic framework to implement concrete instances of ATDx according to specific project and organizational needs.

Paper Nr: 81
Title:

On the Need for a Formally Complete and Standardized Language Mapping between C++ and UML

Authors:

Johannes Trageser

Abstract: This paper presents a vision of a well-integrated solution for implementing (embedded) software with a model-driven approach by using UML as a semantic and conceptual extension to C++ without losing support for established concepts, tools, and libraries of C++. This requires a formally complete and standardized language mapping between relevant and bounded subsets of C++ and UML as the foundation for a bidirectional, translational approach between those two languages, and appropriate tooling that puts this approach into practice. The standardized mapping is a prerequisite for model exchange among different tools.

Paper Nr: 84
Title:

Gamification based Learning Environment for Computer Science Students

Authors:

Imre Zsigmond, Maria I. Bocicor and Arthur-Jozsef Molnar

Abstract: In the present paper, we propose an integrated system that acts as a gamification-driven learning environment for computer science students. Gamification elements have been successfully applied in many fields, resulting in increased involvement of individuals and improved outcomes. Our idea is to employ a well-known aspect of gamification, awarding badges, for students who solve their assignments while observing best practices. The system is deployed as a standalone server with a web front-end through which students submit assignment source code. The system checks submissions for plagiarism using Stanford's MOSS and statically analyzes them via SonarQube, where a custom set of rules is applied. Finally, the program is executed in a sandboxed environment with input/output redirection and a number of predefined test cases. Badges are awarded based on the results of the static and dynamic analyses. Components of the proposed system were previously evaluated within several university computer science courses, and their positive impact was noted by both students and teaching staff.

Paper Nr: 89
Title:

Commit–Defect and Architectural Metrics–based Quality Assessment of C Language

Authors:

Devansh Tiwari, Hironori Washizaki, Yoshiaki Fukazawa, Tomoyuki Fukuoka, Junji Tamaki, Nobuhiro Hosotani, Munetaka Kohama, Yann-Gaël Guéhéneuc and Foutse Khomh

Abstract: The foundation of any software system is its design and architecture. Maintaining and improving the architecture and design as systems grow are difficult tasks. Many studies on the architecture and design of object-oriented systems exist, but only few pertain to the architecture and design of procedural systems. Herein, we study the quality of systems written in the C language and investigate how dependencies, and the associated metrics among files, functions, and modules, are related to defects. We also investigate whether a set of static, dependency, and social-network metrics is related to problems in the architecture. Additionally, we examine the bug-fixing commits in the commit history and the relations between bug-fixing commits and metrics. Thirteen open-source systems from trending GitHub projects are used for the study. We found that files with a high number of bug-fixing commits correlate with higher cycle counts and centrality, indicating that the key files of the architecture in C systems are the same files causing issues in the development process. We identify some version releases that have a large impact on the architecture, as well as files that could be considered high risk and need more attention.
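
Centrality over a file dependency graph, one of the metrics correlated with bug-fixing commits, can be illustrated with plain degree centrality (the study may use other centrality definitions; the file names below are hypothetical):

```python
def degree_centrality(edges):
    """Normalized degree centrality for each file in an undirected
    dependency graph given as (file_a, file_b) edge pairs."""
    nodes = {n for e in edges for n in e}
    degree = {n: 0 for n in nodes}
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    n = len(nodes)
    # a node connected to every other node scores 1.0
    return {node: d / (n - 1) for node, d in degree.items()}
```

Files scoring near 1.0 touch most of the system, so a high bug-fix count in such a file affects many dependents.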

Paper Nr: 98
Title:

Performance-based Refactoring of Web Application: A Case of Public Transport

Authors:

Anna Derezinska and Krzysztof Kwaśnik

Abstract: Performance issues are, among other quality attributes, important factors for web applications devoted to public services. Performance-based refactoring concerns program quality improvement while functional requirements, as well as selected non-functional requirements such as clarity, user-friendliness, and security, remain preserved. We examined three independent web applications supporting card processing for public transport that are widely used in different provinces. Based on this experience, a new web application has been developed. By using the Single Page Application approach, it aims to ease client interaction. Further refactoring helped improve its performance. Its general performance has been compared to that of the three applications. The benefits of the refactoring have been evaluated and discussed.

Paper Nr: 12
Title:

Efficient Decorator Pattern Variants through C++ Policies

Authors:

Virginia Niculescu

Abstract: C++ policies represent an interesting metaprogramming mechanism that allows behavior infusion into a class. In this paper, we investigate the possibility of using them for the implementation of some extended Decorator pattern variants. For the MixDecorator variant, policies are used to simulate extension methods, which are needed for the implementation. Besides this, two other alternatives to the Decorator pattern are proposed: one purely based on inheritance, and a hybrid one; the latter wraps an object with decorations defined as a linear hierarchy of classes introduced using policies. This is possible since a policy introduces a kind of recursive definition for inheritance. The advantages and disadvantages of these variants are analyzed on concrete examples. The hybrid variant presents many important advantages that qualify it as a valid and efficient Decorator pattern variant: HybridDecorator.

Paper Nr: 15
Title:

Patterns for Checking Incompleteness of Scenarios in Textual Requirements Specification

Authors:

David Šenkýř and Petr Kroha

Abstract: In this contribution, we investigate the incompleteness problem in textual requirements specifications. Missing alternative scenarios are one source of incompleteness, i.e., missing descriptions of processing for cases when something runs differently than expected. We check the text of a requirements specification using linguistic patterns, and we try to reveal scenarios and alternative scenarios. Once this process is finished, we decide whether the set of alternative scenarios is complete. As a result, we generate warning messages. We illustrate our approach with examples.

Paper Nr: 17
Title:

Usage of UML Combined Fragments in Automatic Function Point Analysis

Authors:

Ilona Bluemke and Agnieszka Malanowska

Abstract: Combined fragments, introduced in UML 2.0 and allowing complex communication scenarios to be expressed in sequence diagrams, are rarely the subject of research. In this paper, we present a method to transform nine of the UML 2.x combined fragments, i.e. alt, opt, break, neg, ignore, consider, assert, strict and critical, into a set of interaction variants. Our proposition takes advantage of the simple fact that each sequence diagram containing any number of combined fragments can be replaced with a number of simpler diagrams, each representing a single scenario and containing no combined fragments. This transformation can be fully automated. Our method was developed as a pre-processing stage for automatic Function Point Analysis (FPA), which is used in a test effort estimation approach, but it can be used independently as well.

Paper Nr: 39
Title:

Smart Grid Reconfiguration based on Prediction Model for Technical Teams Intervention Integration and Recovery Enhancement

Authors:

Leila Ziouche, Syrine Ben Meskina, Mohamed Khalgui and Laid Kahoul

Abstract: To overcome the problem of recovering from critical failures and to improve reliability, quality of service and recovery performance, it is essential to provide and apply a new solution for smart grid reconfiguration. This solution resolves the problems of late intervention by technical teams and of insufficient energy for recovery by implementing a prediction model that assists the integration of a number of technical teams. In addition, it estimates the number of newly added emergency lines coming from newly integrated renewable sources. The heuristic is programmed using linear programming and the simplex algorithm. The approach is implemented in Python as a tool called SGREP, then tested and validated at run-time on four different real smart grids. The proposed solution thereby improves the guaranteed gains in terms of power availability, waiting time and financial cost.

Paper Nr: 41
Title:

A Novel Approach for Repairing Reconfigurable Hierarchical Timed Automata

Authors:

Roufaida Bettira, Laid Kahloul and Mohamed Khalgui

Abstract: Timed Automata (TA) are a formalism for the formal modeling and verification of systems with temporal requirements. Reconfigurable Hierarchical Timed Automata (RHTA) extend TA to cover the reconfigurability and hierarchy of large reconfigurable discrete event control systems (RDECS). After formally modeling an RDECS with RHTA, formal verification against functional properties is performed using a model-checker. If a property is not satisfied, the model-checker generates a counterexample. Mostly, the non-satisfaction of a functional property is due to incorrect clock constraints (guards and invariants). In this paper, we propose an approach based on mutation testing for repairing a faulty RHTA model so that the concerned functional property is satisfied. First, the hierarchy structure of each configuration is tested and repaired. Then, the generated counterexample is used to repair the incorrect guards specified in the TA models that constitute the RHTA model. Experimentation shows that the proposed approach is able to repair a considerable part of the initially designed RHTA model.

Paper Nr: 44
Title:

State Management and Software Architecture Approaches in Cross-platform Flutter Applications

Authors:

Michał Szczepanik and Michał Kędziora

Abstract: Flutter is an open-source cross-platform development framework. It is used to develop applications for Android, iOS, Windows, Mac, Linux, and the web. The technology was released on December 4, 2018, and is still young, lacking established architectural patterns and concepts. In this paper, the authors compare state management and architecture approaches used in Flutter application development. They also propose a combination of two approaches that solves the main problem of existing approaches related to global and local state management. The proposed solution can be used to develop even complex and large Flutter applications.

Paper Nr: 48
Title:

Ontology based UX Personalization for Gamified Education

Authors:

Zsigmond Imre

Abstract: Gamification techniques are increasingly used in education, in both the private and public sectors. These game design elements need to be carefully tailored to the students, considering a variety of factors, if positive results are to be achieved. The key hindrance is the lack of systematic basic research mapping out the connections between student metrics and the game mechanics used. In this paper, we present a conceptual framework for improving gamification implementations. We show how ontologies and ontology-based reasoning can improve both the basic research and the application of gamification in education.

Paper Nr: 49
Title:

Data Lake and Digital Enterprise

Authors:

Oumaima El Haddadi, Mahmoud El Hamlaoui, Dkaki Taoufiq and Mahmoud Nassar

Abstract: Due to the digital transformation and the huge amount of publicly available data, decision support systems are becoming highly useful in helping to define, manage and improve business strategies and objectives. Indeed, data is a key asset and a key competitive differentiator for all organizations. This newly available data has changed traditional data processing and created new challenges related to the velocity, volume and variety of data. To address the challenges related to the storage of heterogeneous data and to enable rapid data processing, we explore the data lake paradigm. In this paper, we present the state of the art of Data Lake systems and highlight their major advantages and drawbacks. We also propose a solution to improve Data Lake systems.

Paper Nr: 55
Title:

From BPMN to Sequence Diagrams: Transformation and Traceability

Authors:

Aljia Bouzidi, Nahla Z. Haddar, Mounira Ben-Abdallah and Kais Haddar

Abstract: A business cannot be competitive unless its business processes are aligned with its information system. Indeed, a perfect alignment is key to coherent management and to the success of the business. It is therefore important to bring business process modeling and information system (IS) modeling activities closer together. This paper presents an approach to derive a dynamic software model from a business process model, including the trace links between source and target elements. Our approach is based on a set of rules that transform a BPMN business process model into a UML sequence diagram, structured according to the Model-View-Controller design pattern, together with a trace model. To show the feasibility of the approach in practice, we developed a tool that implements the transformation rules.

Paper Nr: 66
Title:

Multi Software Product Lines: A Systematic Mapping Study

Authors:

Pasquale Ardimento, Nicola Boffoli and Giuseppe Superbo

Abstract: Even though Software Product Line (SPL) engineering is an established technique in software engineering, its use has several limitations. These are mainly caused by the exponential increase in the complexity of software systems and by the high pace of software and hardware evolution. Many of these limitations have been studied and addressed by applying the Multi Software Product Line (MSPL), an extension of the SPL. MSPL is an emerging and novel technique based on using more than one SPL to derive a functional product system. This paper aims to characterize the state of the art of MSPL; the main goal is to highlight the achievements reached in this field and to discuss the open issues and uncovered aspects of this approach. In order to provide an overall analysis of the research community, a well-defined systematic mapping method is applied to classify, in a proper scheme, every paper strictly related to this topic. The classified results could give valid hints about gaps that should be investigated further.

Paper Nr: 87
Title:

Capturing Tracing Data Life Cycles for Supporting Traceability

Authors:

Dennis Ziegenhagen, Elke Pulvermueller and Andreas Speck

Abstract: Activities for achieving traceability in software development projects include planning, implementing, using and maintaining a suitable strategy. Current research aims at supporting these activities by automating the involved tasks, processes and applications. In this paper, we present a concept for a flexible framework that enables the integration of respective functional modules, e.g. artifact data extractors and trace link generators, to form traceability environments according to a project's demands. By automating the execution of the framework's components and monitoring artifact-related interactions between developers and their tools, the tracing data's life cycle is captured and provided for further use. The paper presents an example framework setup which is used to demonstrate the path and enrichment of tracing data along these components. Furthermore, we discuss observations and findings made while defining and realizing the example. We aim to use this information to further improve the framework and thus support the implementation of traceability environments.

Paper Nr: 88
Title:

Understanding Interaction and Communication Challenges Present in Software Engineering

Authors:

Sergey Masyagin, Giancarlo Succi, Sofiia Yermolaieva and Nadezhda Zagvozkina

Abstract: Researchers have largely recognized that interactions and communications pose major challenges in software development, especially when eliciting requirements. However, they have not fully appreciated the sources and depth of these challenges, and have thus approached them with mechanisms that have not (fully) achieved the desired objectives. In this position paper, we claim that such challenges can be explained using three major theories from the social sciences: the theory of verbal and nonverbal communication, systemic theory, and democratic theory. We also argue that some of the successful practices of agile methods can be explained in terms of these theories. Finally, we stipulate that a full appreciation of these theories can result in a significant leap forward in the discipline: identifying new mechanisms that can help overcome the mentioned challenges, and understanding fully what we are doing and why.