ENASE 2011 Abstracts


Full Papers
Paper Nr: 5
Title:

ON THE PREDICTABILITY OF SOFTWARE EFFORTS USING MACHINE LEARNING TECHNIQUES

Authors:

Wen Zhang

Abstract: This paper investigates the predictability of software effort using machine learning techniques. We employed unsupervised learning, in the form of k-medoids clustering with different similarity measures, to extract natural clusters of projects from software effort data sets, and supervised learning, in the form of the J48 decision tree, back-propagation neural networks (BPNN) and naïve Bayes, to classify the software projects. We also investigate the impact of imputing missing project values on the performance of both the unsupervised and supervised learning techniques. Experiments on the ISBSG and CSBSG data sets demonstrate that unsupervised learning with k-medoids clustering produces poor performance in software effort prediction, and that the Kulzinsky coefficient performs best among the similarity measures for comparing projects. The supervised learning techniques produced superior performance in software effort prediction; among the three, BPNN performed best. Missing data imputation improved the performance of both the unsupervised and supervised learning techniques.
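To make the similarity computation concrete, the sketch below implements one common form of the Kulzinsky (Kulczynski) coefficient for binary project-attribute vectors and performs the assignment step of k-medoids against two fixed medoids. The vectors, the medoids, and the a/(b+c) form of the coefficient are illustrative assumptions; they are not taken from the paper or from the ISBSG/CSBSG data.

```python
# Kulczynski similarity for binary vectors: a / (b + c), where
# a = attributes present in both vectors, b and c = attributes
# present in only one of them.
def kulczynski(x, y):
    a = sum(1 for xi, yi in zip(x, y) if xi == 1 and yi == 1)
    b = sum(1 for xi, yi in zip(x, y) if xi == 1 and yi == 0)
    c = sum(1 for xi, yi in zip(x, y) if xi == 0 and yi == 1)
    return a / (b + c) if (b + c) else float("inf")  # identical vectors

def nearest_medoid(project, medoids):
    # k-medoids assignment step: pick the medoid with the highest similarity
    return max(range(len(medoids)), key=lambda i: kulczynski(project, medoids[i]))

medoids = [[1, 1, 0, 0], [0, 0, 1, 1]]   # hypothetical cluster centres
project = [1, 1, 1, 0]                   # hypothetical project profile
print(nearest_medoid(project, medoids))  # assigned to the first medoid (index 0)
```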

Paper Nr: 16
Title:

A TYPE SAFE DESIGN TO ALLOW THE SEPARATION OF DIFFERENT RESPONSIBILITIES INTO PARALLEL HIERARCHIES

Authors:

Francisco Ortin and Miguel García

Abstract: The Tease Apart Inheritance refactoring is used to avoid tangled inheritance hierarchies that lead to code duplication. This big refactoring creates two parallel hierarchies and uses delegation to invoke one from the other. One of the drawbacks of this approach is that the root class of the new refactored hierarchy should be general enough to provide all its services. This weakness commonly leads to meaningless interfaces that violate the Liskov substitution principle. This paper describes a behavioral design pattern that allows modularization of different responsibilities in separate hierarchies that collaborate to achieve a common goal. It allows using the specific interface of each class in the parallel hierarchy, without imposing a meaningless interface on its root class. The proposed design is type safe, meaning that compile-time type checking ensures that no type error will be produced at runtime, avoiding the use of dynamic type checking and reflection.

Paper Nr: 17
Title:

INCENTIVES AND PERFORMANCE IN LARGE-SCALE LEAN SOFTWARE DEVELOPMENT - An Agent-based Simulation Approach

Authors:

Benjamin S. Blau, Tobias Hildenbrand, Matthias Armbruster and Martin G. Fassunge

Abstract: The application of lean principles and agile project management techniques in the domain of large-scale software product development has gained tremendous momentum over the last decade. However, a simple transfer of good practices from the automotive industry combined with experiences from agile development at the team level is not possible, due to fundamental differences stemming from the particular domain specifics – i.e. different types of products and components (material versus immaterial goods), knowledge work versus production systems, as well as established business models. In particular, team empowerment and the absence of hierarchical control at all levels affect goal orientation and business optimization. In such settings, the design of adequate incentive schemes to align local optimization and opportunistic behavior with the overall strategy of the company is a crucial activity. Following an agent-based simulation approach with reinforcement learning, we (i) address the question of how information regarding backlog item dependencies is shared within and between development teams at the product level, subject to different incentive schemes. We (ii) compare different incentive schemes ranging from individual to team-based compensation. Based on our results, we are (iii) able to provide recommendations on how to design such incentives, what their effect is, and how to choose an adequate development structure to foster overall software product development flow by means of more economic decisions, resulting in a shorter time to market. For calibrating our simulation, we rely on practical experience from a very large software company that has been piloting and implementing lean and agile for about three years.

Paper Nr: 19
Title:

A CRITICAL COMPARISON OF EXISTING SOFTWARE CONTRACT TOOLS

Authors:

Janina Voigt

Abstract: The idea of using contracts to specify interfaces and interactions between software components was proposed several decades ago. Since then, a number of tools providing support for software contracts have been developed. In this paper, we explore eleven such technologies to investigate their approach to various aspects of software contracts. We present the similarities as well as the areas of significant disagreement and highlight the shortcomings of existing technologies. We conclude that the large variety of approaches to even some basic concepts of software contracts indicates a lack of maturity in the field and the need for more research.

Paper Nr: 27
Title:

EXECUTION MEASUREMENT-DRIVEN CONTINUOUS IMPROVEMENT OF BUSINESS PROCESSES IMPLEMENTED BY SERVICES

Authors:

Andrea Delgado and Barbara Weber

Abstract: Continuous improvement of business processes is becoming increasingly important for organizations that need to maintain and improve their business in the current context; to that end, the continuous incorporation of changes is a key issue. However, an organization that has not defined how to measure and analyze the execution of its business processes is unlikely to have real and reliable information with which to introduce these changes, nor will it easily achieve the goals set for the improvement effort. The MINERVA framework provides a comprehensive guide for the continuous improvement of business processes implemented by services, following the principles of model-driven development. Among other elements, it defines a Business Process Continuous Improvement Process (BPCIP) and a Business Process Execution Measurement Model (BPEMM). In this paper we present the BPCIP and the BPEMM, used to: first, identify the business goals defined for business processes; second, select the appropriate execution measures to be implemented and collect the associated execution information; third, analyze and evaluate this information, identifying improvement opportunities; and fourth, integrate these improvements into business processes in a systematic way to achieve the desired improvement results.

Paper Nr: 28
Title:

ADVANCES IN STRUCTURE EDITORS - Do They Really Pay Off?

Authors:

Andreas Gomolka and Bernhard G. Humm

Abstract: Structure editors emphasise a high-fidelity representation of the underlying tree structure of a program, often using a clearly identifiable 1-to-1 mapping between syntax tree elements and on-screen artefacts. This paper presents layout and behaviour principles for structure editors and a new structure editor for Lisp. The evaluation of the editor’s usability reveals an interesting mismatch: whereas by far most participants of a questionnaire intuitively favour the structure editor over the textual editor, objective improvements are measurable yet not significant.

Paper Nr: 31
Title:

ROBUSTIFYING THE SCRUM AGILE METHODOLOGY FOR THE DEVELOPMENT OF COMPLEX, CRITICAL AND FAST-CHANGING ENTERPRISE SOFTWARE

Authors:

Marcos Vescovi

Abstract: A class of complex enterprise software, including financial, taxation and supply chain management software, contains mission-critical functionality, and change requests are substantial and frequent. Agile methodologies provide the adaptability, but not the robustness, necessary to deal with this criticality and to avoid software entropy. Task analysis shows that a significant analysis and design effort is required to flatten the change curve. The Robust Agile Methodology, “R-Agile,” is proposed, combining the adaptability to handle fast-changing requirements with the design and testing necessary to handle complexity and criticality.

Paper Nr: 32
Title:

AN EVALUATION FRAMEWORK FOR VALIDATING ASPECTUAL PERVASIVE SOFTWARE SERVICES

Authors:

Dhaminda B. Abeywickrama and Sita Ramakrishnan

Abstract: Context-dependent information has several qualities that make pervasive services challenging compared to conventional Web services. Therefore, sound software engineering practices are needed during their development, execution and validation. This paper establishes a framework to evaluate pervasive service-oriented software architectures. The method of evaluation is based on a comparison of key features. The framework consists of two views: vertical and horizontal. The vertical evaluation compares several research tools to the Aspectual FSP Generation tool developed in this research. The tools are compared across the platform-independent and platform-specific levels of the model-driven architecture. The horizontal evaluation view is designed to validate several desired key features that are mainly required at the platform-specific level of the service specification. These criteria mainly cover two aspects: the formal methods and tools employed, and the context and adaptation dimensions of the customization approach used in the services. The vertical evaluation has demonstrated that the Aspectual FSP Generation tool has unique features in context-dependent behavioral modeling and code generation. The horizontal evaluation has shown that the formal methods and tools employed, and the customization approach used in the services, are effective towards the overall objectives of this research. The approach is explored using a real-world case study in intelligent transport.

Paper Nr: 33
Title:

AN ADAPTABLE BUSINESS COMPONENT BASED ON PRE-DEFINED BUSINESS INTERFACES

Authors:

Oscar M. Pereira

Abstract: Object-oriented and relational paradigms are simply too different to bridge seamlessly. Architectures of database applications relying on three tiers need business tiers to bridge the application and database tiers. Business tiers hide all the complexity of converting data between the other two tiers, thereby easing programmers’ work. Business tiers are critical components of database applications, not only for their role but also for the effort spent on their development and maintenance. In this paper we propose an adaptable business component (ABC) able to manage SQL statements on behalf of other components. Other components may create at run time a pool of SQL statements of any complexity and delegate their management to the ABC component. The only constraint is that the schemas of the SQL statements must conform to one of the predefined schemas (interfaces) provided by the ABC component. The main contribution of this paper is twofold: 1) the presentation of an adaptable business component, and 2) showing that the ABC source code may be automatically generated. The main outcome of this paper is the evidence that the ABC component is an effective alternative approach for building business tiers that bridge the object-oriented and relational paradigms.

Paper Nr: 34
Title:

BUSINESS PROCESS MODEL IMPROVEMENT BASED ON MEASUREMENT ACTIVITIES

Authors:

Laura Sánchez-González and Francisco Ruiz

Abstract: The current importance of business process improvement lies in the fact that it is a key aspect of organizational improvement. Since business process improvement can be approached from different perspectives, we propose the use of measurement as a technique by which to collect information concerning the quality of the process. We have specifically applied measures to the design stage of the business process lifecycle, which means measuring conceptual models. Measurement in the Design and Analysis lifecycle stage has several advantages, principally that it is a means of avoiding the propagation of errors to later stages, in which their detection and correction may be more difficult. We therefore propose certain steps for business process model improvement, based on measurement activities (measurement, evaluation, and redesign). These activities have been applied to a real hospital business process model. The model was modified by following expert opinions and modelling guidelines, thus leading to the attainment of a higher-quality model. Our findings clearly support the practical utility of measurement activities for business process model improvement.

Paper Nr: 38
Title:

TEAM RADAR - Visualizing Team Memories

Authors:

Cong Chen and Kang Zhang

Abstract: In distributed software teams, awareness information is often lost due to communication restrictions. Researchers have attempted to retain team awareness by sharing change information across workspaces. The major challenge is how to convey information to readers effectively while avoiding information overload. In this paper, we address the benefit of delivering fine-grained awareness information, and present a new technique and prototype implementation for its capture and visualization. We also discuss how visual techniques and metaphors could promote user collaboration.

Paper Nr: 44
Title:

MODEL-DRIVEN TESTING - Transformations from Test Models to Test Code

Authors:

Beatriz Pérez Lamancha and Pedro Reales Mateo

Abstract: In MDE, software products are built with successive transformations of models at different abstraction levels, which in the end are translated into executable code for the specific platform where the system will be deployed and executed. As testing is one of the essential activities in software development, researchers have proposed several techniques to deal with testing in model-based contexts. In previous works, we described a framework to automatically derive UML Testing Profile (UML-TP) test cases from UML 2.0 design models. These transformations are made with the QVT language which, like UML 2.0 and UML-TP, is an OMG standard. Now, we have extended the framework for deriving the source code of the test cases from those in the UML Testing Profile. This transformation, which can be applied to obtain test cases in a variety of programming languages, is implemented with MOFScript, which is also an OMG standard. Thus, this paper almost closes our cycle of testing automation in MDE environments, always within the limits of OMG standards. Moreover, thanks to this standardization, the development of new tools is not required.

Paper Nr: 45
Title:

A COMPARATIVE STUDY OF GOAL-ORIENTED APPROACHES TO MODELLING REQUIREMENTS FOR COLLABORATIVE SYSTEMS

Authors:

Miguel A. Teruel, Elena Navarro and Víctor López-Jaquero

Abstract: A collaborative system is software that allows several users to work together and carry out collaboration, communication and coordination tasks. To perform these tasks, the users have to be aware of other users’ actions, usually by means of a set of awareness techniques. However, when these systems have to be specified for development, severe difficulties emerge in describing the requirements associated with these special functionalities, usually considered non-functional requirements. Therefore, the selection and use of proper requirements engineering techniques becomes a challenging and important decision. In this paper three Goal-Oriented approaches, namely the NFR framework, i* and KAOS, are evaluated in order to determine which is the most suitable for dealing with this problem of requirements specification in collaborative systems.

Paper Nr: 52
Title:

TOWARDS TECHNOLOGY INDEPENDENT STRATEGIES FOR SOA IMPLEMENTATIONS

Authors:

Zheng Li

Abstract: Benefiting from technology-based strategies, Service-Oriented Architecture (SOA) has been able to achieve general goals such as agility, flexibility, reusability and efficiency. Nevertheless, technical conditions alone cannot guarantee successful SOA implementations. As a valuable and necessary supplement, the space of technology-independent strategies should also be explored. By treating an SOA system as an instance of an organization and identifying the common ground between the similar processes of SOA implementation and organization design, this paper uses existing work in the organization theory area to inspire research into technology-independent strategies for SOA implementation. As a result, we identify four preliminary strategies from the organizational area that can be applied to support SOA implementations. Furthermore, a novel methodology for investigating technology-independent strategies for implementing SOA is revealed, which encourages interdisciplinary research across service-oriented computing and organization theory.

Paper Nr: 55
Title:

CORRECT MATCHING OF COMPONENTS WITH EXTRA-FUNCTIONAL PROPERTIES - A Framework Applicable to a Variety of Component Models

Authors:

Kamil Ježek and Přemek Brada

Abstract: Many current approaches attempt to enrich software systems with extra-functional properties. These attempts have become remarkably important with the gradual adoption of component-based programming. Typically, extra-functional properties need to be taken into account in the component-binding phase and therefore included in the process of verifying component compatibility. Although a lot of research has been done, practical usage of extra-functional properties is still rather scarce. The main problem may lie in the slow adaptability of specialized research models to rapidly changing industrial needs. We have designed a solution to this issue in the form of a modular framework which provides a formally sound yet practical means to declare, assign and evaluate extra-functional properties in the context of component-based applications. One of its strengths is its applicability to a variety of industrial as well as research component models. This paper describes the models and algorithms of the framework and introduces a prototype implementation proving the concept.
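The kind of binding-time compatibility check such a framework performs can be sketched as follows; the property names and the numeric comparison rules below are invented for illustration and do not reflect the paper's actual property model.

```python
# Toy extra-functional property (EFP) check at component binding time:
# a provided interface is compatible with a required one when every
# demanded property is satisfied. For latency-like properties the
# provided value must not exceed the required bound; for throughput-like
# properties it must meet or exceed it.
def compatible(provided, required):
    checks = {
        "response_time_ms": lambda p, r: p <= r,  # lower latency is better
        "throughput_rps":   lambda p, r: p >= r,  # higher throughput is better
    }
    return all(name in provided and checks[name](provided[name], bound)
               for name, bound in required.items())

provided = {"response_time_ms": 40, "throughput_rps": 500}
required = {"response_time_ms": 50, "throughput_rps": 300}
print(compatible(provided, required))  # True: 40 <= 50 and 500 >= 300
```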

Paper Nr: 59
Title:

EFFECT OF NON-WORK RELATED INTERNET USAGE ON STIMULATING EMPLOYEE CREATIVITY IN THE SOFTWARE INDUSTRY

Authors:

Sachitha I. P. Gunawardena and Sanath Jayasena

Abstract: This study investigates the effect of non-work related Internet usage on stimulating employee creativity in the software industry. Drawing from past literature, this research proposes six dimensions for measuring creativity stimulation: accessibility to information, intrinsic motivation to execute ideas, curiosity and exploration, independent thinking, collaboration, and breaking down technical barriers. A survey was conducted by distributing a research questionnaire among a stratified random sample of knowledge workers employed in the software industry. The findings were partially consistent with the initial predictions, which stated a positive effect of non-work related Internet usage on creativity stimulation. In addition, the results also provide an exploratory view of the nature of employees’ non-work related Internet usage.

Paper Nr: 68
Title:

A NEW AGILE PROCESS FOR WEB DEVELOPMENT

Authors:

Vinícius Pereira and Antonio Francisco do Prado

Abstract: This paper describes an agile methodology for Web development based on User Stories. The main objective of this methodology is to establish a closer relationship between application code and requirements, so that the development team and the user come to a greater shared understanding during the application development process. The methodology is divided into three disciplines, each one refining the User Stories, from requirements specification using the Navigation Model and Story Cards to the execution of these User Stories to guide the coding. The team can also use these Stories as acceptance tests, which represent the user’s behaviour when using the system. With all this, the development team may, in the end, have more assurance that the Web application represents what the user wants.

Paper Nr: 70
Title:

TRANSFORMING ATTRIBUTE AND CLONE-ENABLED FEATURE MODELS INTO CONSTRAINT PROGRAMS OVER FINITE DOMAINS

Authors:

Raúl Mazo and Camille Salinesi

Abstract: Product line models are important artefacts in product line engineering. One of the most popular languages for modelling the variability of a product line is the feature notation. Since the initial proposal of feature models in 1990, the notation has evolved in different respects. One of the most important improvements allows specifying the number of instances that a feature can have in a particular product; this implies an important increase in the number of variables needed to represent a feature model. Another improvement allows features to have attributes, which can take values in domains other than the boolean one. These two extensions have increased the complexity of feature models and have therefore made manual, or even automated, reasoning on feature models more difficult. To the best of our knowledge, very few works in the literature address this problem. In this paper we show that reasoning on extended feature models is easy and scalable using constraint programming over integer domains. The aim of the paper is twofold: (a) to show the rules for transforming extended feature models into constraint programs, and (b) to demonstrate, by means of 11 reasoning operations over feature models, the usefulness and benefits of our approach. We evaluated our approach by transforming 60 feature models of sizes up to 2000 features and by comparing it with 2 other approaches available in the literature. The evaluation showed that our approach is correct, useful and scalable to industry-size models.
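The transformation idea can be illustrated in miniature: each clonable feature becomes an integer variable counting its instances, each attribute becomes a variable over its domain, and the model's relations become constraints. The toy feature model below is entirely hypothetical, and brute-force enumeration stands in for a real constraint solver over finite domains.

```python
from itertools import product

# Hypothetical extended feature model as a constraint program over
# integer domains: the root is mandatory (exactly 1 instance), feature
# "usb" is clonable with cardinality 0..3, and attribute "speed" takes
# a value in {1, 2} exactly when at least one usb instance is selected
# (0 meaning "attribute absent").
domains = {"root": [1], "usb": [0, 1, 2, 3], "speed": [0, 1, 2]}

def valid(cfg):
    ok = cfg["root"] == 1                         # root always selected
    ok &= 0 <= cfg["usb"] <= 3                    # clone cardinality
    ok &= (cfg["usb"] > 0) == (cfg["speed"] > 0)  # attribute tied to feature
    return ok

names = list(domains)
solutions = [dict(zip(names, vals))
             for vals in product(*domains.values())
             if valid(dict(zip(names, vals)))]
print(len(solutions))  # 7 valid products: usb=0 once, usb in 1..3 times speed in {1,2}
```

Counting valid products like this is one of the reasoning operations (e.g. "number of products") that the constraint encoding makes straightforward.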

Short Papers
Paper Nr: 10
Title:

SEEDED FAULTS AND THEIR LOCATIONS DESIGN USING BAYES FORMULA AND PROGRAM LOGIC IN SOFTWARE TESTING

Authors:

Wang Lina

Abstract: Focusing on three questions, “what faults to seed”, “how to seed faults more effectively” and “how to select the seeded fault locations”, methods of fault seeding are studied. Aiming at procedural-language source code, a fault classification scheme is presented. Referring to Howden’s fault classification scheme, and based on the occurrence causes and manifestations of software faults, a hierarchy of fault classes is designed. The faults are categorized as assignment faults, control flow faults or runtime environment faults, and are then further classified by degree; 96 categories are included in all. According to this classification, a statistical method based on the Bayes formula is designed to determine the manifestations of seeded faults. A logical method based on the logical relation between the control flow and data flow of a program is presented to set the seeding locations, and the concrete seeding process is introduced. Finally, the methods are verified with a case study.
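The statistical step can be sketched as a direct application of the Bayes formula: given an observed fault manifestation, compute the posterior probability of each fault class and seed accordingly. The priors and likelihoods below are invented for illustration and are not the paper's calibrated values.

```python
# Bayes formula for fault-seeding design:
# P(class | symptom) = P(symptom | class) * P(class) / P(symptom),
# where P(symptom) is obtained by summing over all fault classes.
priors = {"assignment": 0.5, "control_flow": 0.3, "runtime_env": 0.2}
likelihood = {  # hypothetical P("wrong_output" symptom | fault class)
    "assignment": 0.6, "control_flow": 0.3, "runtime_env": 0.1,
}

def posterior(symptom_likelihood, priors):
    evidence = sum(symptom_likelihood[c] * priors[c] for c in priors)
    return {c: symptom_likelihood[c] * priors[c] / evidence for c in priors}

post = posterior(likelihood, priors)
best = max(post, key=post.get)
print(best)  # "assignment" has the highest posterior for this symptom
```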

Paper Nr: 20
Title:

MICROSSB: A LIGHTWEIGHT FRAMEWORK FOR ON-LINE DISTRIBUTED APPLICATION BASED ON SOFT SYSTEM BUS

Authors:

Jian Xiao, Jizhou Sun, Gang Li and Chun Li

Abstract: Software development based on a Soft System Bus (SSB) is a novel approach to software engineering. From the viewpoint of the SSB, this paper presents a lightweight framework for developing on-line distributed applications, called MicroSSB. The framework partly implements the core functions of an SSB-based system, including the communication channel, data-instruction station, message exchange, security checking and dynamic component management. The paper also proposes a guideline for using MicroSSB. With MicroSSB, the designers and developers of distributed applications can focus on the core of their product instead of struggling with low-level distributed programming. As case studies, the paper also shows two real applications based on MicroSSB: an experimental collaborative decision making system for air traffic flow control and a marine emergency commanding system.

Paper Nr: 30
Title:

INTERACTIVE COMPONENT VISUALIZATION - Visual Representation of Component-based Applications using the ENT Meta-model

Authors:

Jaroslav Šnajberk and Přemek Brada

Abstract: UML is considered to be a universal solution for diagramming any application, but it also has its shortcomings: it needs several diagrams to describe one problem, it cannot provide different views of one diagram, and it is not interactive. This leads to hours spent drawing the same thing from different views; any change has to be applied several times, and the author of a UML diagram has to balance good readability against providing a sufficient amount of information. In particular, the UML component diagram has insufficient expressive power to capture all the facts of even today’s component models and architectures. In this paper, we propose a visualization aimed at modular and composed architectures that is content-aware, so that it can present the model of a component-based architecture in different ways, depending on user needs. By default, it presents minimal information to reduce cognitive load and keep the diagrams comprehensible, while making additional information available when the user needs it. This paper thus suggests a possible substitute for UML in the domain of component-based applications.

Paper Nr: 35
Title:

SOFTWARE EFFORT ESTIMATION MODEL BASED ON USE CASE SPECIFICATION

Authors:

Xinguang Chen

Abstract: Software effort estimation is essential for project planning. Use cases are widely used to capture and describe customer requirements and serve as an index for software measurement and estimation. Based on the framework of the traditional use case point estimation model, the paper presents UCSE, an effort estimation model based on use case specifications. First, the model abstracts the factors influencing software effort from the use case specification and calculates the Use Case Weight, a measure of use case size. Second, a function is constructed to translate software size expressed in Use Case Weight into software scale measured in kilo source lines of code (KSLOC). Subsequently, the COCOMO II effort estimation model is used to estimate the software effort from the estimated size in KSLOC. Compared with the traditional use case point estimation model, UCSE makes use of more relevant information and is more operable, since it provides more concrete and objective references for the analysis and measurement of software effort factors in use cases. Moreover, the presented case study shows that its results are more stable.
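The estimation chain the abstract describes (use case weights, size conversion, COCOMO-style effort) can be sketched as follows; the weights, the KSLOC-per-weight conversion factor, and the effort constants A and B are illustrative assumptions, not the calibrated values of the UCSE model.

```python
# Sketch of the UCSE pipeline: use case weights -> size (KSLOC) -> effort.
# All numeric constants here are assumed for illustration only.
use_case_weights = [5, 10, 15]      # hypothetical per-use-case weights
KSLOC_PER_WEIGHT = 0.1              # assumed size-conversion factor

def estimate_effort(weights, a=2.94, b=1.10):
    # COCOMO-style effort equation: effort = A * size^B (person-months),
    # with a and b standing in for calibrated coefficients.
    size_ksloc = sum(weights) * KSLOC_PER_WEIGHT
    return a * size_ksloc ** b

print(round(estimate_effort(use_case_weights), 2))
```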

Paper Nr: 36
Title:

INTERACTION CENTRIC REQUIREMENTS TRACEABILITY

Authors:

Nitesh Narayan and Yang Li

Abstract: Requirements traceability provides the ability to follow the life-cycle of a requirement from its origin through subsequent refinement and use. A key issue restricting the adoption of approaches for creating and maintaining these relationships is the lack of tool support employing a centralized repository for heterogeneous artifacts. Different artifacts are stored in different repositories, so traceability links are expensive to maintain. A centralized repository can facilitate capturing the stakeholders’ interactions, which result in the creation and modification of artifacts and their relationships. These interactions hold the rationale behind changes. In this paper we propose a novel model-based CASE tool, UNICASE, which aids in maintaining requirements traceability by incorporating disparate artifacts. Further, the tool facilitates capturing the evolution of requirements invoked by informal communication in the form of discussions and comments.

Paper Nr: 50
Title:

HOW EFFECTIVE IS MODEL CHECKING IN PRACTICE?

Authors:

TheAnh Do

Abstract: Software and hardware systems are becoming increasingly large and complex, and can change rapidly. Ensuring the reliability of these systems can therefore be a problem, and traditional techniques such as testing and simulation cannot cope on their own. Model checking offers an alternative, but its use is still limited. We identify the disadvantages of model checking in practical usage and research directions to tackle them. We clearly define the context for each disadvantage and concretely describe the difficulties that verification users may face when applying the model checking technique to certain systems. We also provide a comprehensive picture of research work in this context and highlight the outcomes and shortcomings of each work in relation to the others. The paper can therefore serve as a useful manual for verification users in practice and as helpful guidance for research in model checking.

Paper Nr: 51
Title:

A MIDDLEWARE BASED, POLICY DRIVEN ADAPTATION FRAMEWORK TO SIMPLIFY SOFTWARE EVOLUTION

Authors:

N. H. Awang

Abstract: Evolution is said to be one of the main causes of problems for software; unplanned evolution exposes an organization to high software maintenance costs. For these reasons, we embarked on this research to create a framework for simplifying software evolution. This paper presents that framework, called the Middleware-based Policy-driven Adaptation Framework (MiPAF). MiPAF aims to control the negative effects of software evolution using the concept of software adaptation, supporting both parameterized and compositional adaptation. MiPAF is built on well-established foundations, i.e. middleware and web services. These two concepts are well accepted by the software developer community, which increases the chances of MiPAF being accepted and used by this community. The adaptation mechanism of MiPAF is driven by an XML-based policy. To evaluate MiPAF, we implemented the framework in the C language and ran it on the Windows platform, using an existing unit trust system (UTS) for the evaluation.

Paper Nr: 54
Title:

EXTENDED METADATA FOR DATA WAREHOUSE SCHEMA

Authors:

N. Parimala and Vinay Gautam

Abstract: We are concerned with providing support for identifying changes to a data warehouse schema. The approach involves building extended metadata, E-Metadata, with which we identify changes. In this paper we show the manner in which E-Metadata is built. E-Metadata consists of the technical metadata and an ontology. In the E-Metadata Development Process (EDP), the technical metadata is first extracted from the metadata of the warehouse schema. In the next stage, the ontology development process, the schema terms are extracted from the technical metadata. The data warehouse administrator is asked to provide business terms for the schema terms. We then search WordNet for synonyms, hypernyms, etc. for these terms, and use this information to build the ontology.

Paper Nr: 67
Title:

HYBRID ZIA AND ITS APPROXIMATED REFINEMENT RELATION

Authors:

Zining Cao and Hui Wang

Abstract: In this paper, we propose a specification model combining interface automata, hybrid automata and the Z language, named HZIA. This model can be used to describe the temporal, hybrid and data properties of hybrid software/hardware components. We also study the approximated refinement relation on HZIAs.

Paper Nr: 75
Title:

TO CONTAIN COST, LET'S NOT OVER BUILD OUR SOFTWARE SOLUTIONS

Authors:

Jie Liu

Abstract: In the software industry, many deployed projects have suffered one or more of the following: fewer features than planned, late deployment, or budget overruns. We participated in a project that suffered all of these; more significantly, it overran its budget by at least 400%. Looking back, many wrong decisions were made, such as misjudging users’ expectations and environments, subscribing to an over-complicated backend architecture, and selecting a different programming language that made it impossible to reuse existing code. In this paper, serving as a case study, we argue that an effective approach to containing the cost of a software project, especially internal software, is to build a system that answers the core requirements with room for improvement, rather than the best system on the market.

Workshop MDA & MDSD

Full Papers
Paper Nr: 3
Title:

Behavior Model Mapping

Authors:

Judith Michael and Heinrich C. Mayr

Abstract: The work presented here is part of a comprehensive project that aims at supporting user-centered software development from requirements elicitation to program generation. This paper focuses on transforming validated “preconceptual” requirements models into conceptual ones (a UML dialect), which are then input for a program generation engine (OlivaNova). In particular, we discuss a set of rules, and their prototypical implementation, that map networks of so-called CooperationTypes (as models of business processes) to state charts. This differs from other studies, which mostly deal with transformations or mappings of structure models.

Paper Nr: 4
Title:

On the Use of UML Stereotypes in Creating Higher-order Domain-specific Languages and Tools

Authors:

Edgars Rencis and Janis Barzdins

Abstract: Although many different approaches to building graphical domain-specific languages and tools exist nowadays, no platform can ever be said to be final from the usability point of view. In this paper, we show how a UML stereotype-like mechanism can be integrated into a tool-building framework in a very user-friendly way. Having such a higher-order language, a user can create new tools or adjust existing ones while operating only with the concepts of the language and knowing nothing about the technical details of the platform.

Paper Nr: 5
Title:

Model-driven Testing Approach for Embedded Systems Specifics Verification based on UML Model Transformation

Authors:

Jurijs Grigorjevs

Abstract: This paper is devoted to a model-driven testing approach for embedded systems’ non-functional requirements. The method is based on UML state and sequence diagrams, which are suitable for presenting synchronization, asynchronous behavior and timing constraints. The article discusses principles of model transformation and shows a practical approach to generating a testing model from a system model. The idea of such a transformation is to generate test cases focused on verifying specific behavior of embedded systems. The example described in the paper demonstrates the method applied to timing behavior verification using the UML sequence diagram. The presented example is based on an XMI representation of a sequence diagram, which is first pre-processed and moved into database structures; transformation rules are then applied to generate the testing model. As a result of this transformation, a set of valid and invalid test cases is generated in the form of the UML Testing Profile.

Paper Nr: 6
Title:

Backward Requirements Traceability within the Topology-based Model Driven Software Development

Authors:

Erika Asnina, Bernards Gulbis, Janis Osis and Gundars Alksnis

Abstract: The inconsistency between software and its specifications leads to unpredictable side effects after a change is implemented. Impact analysis may be useful here, but manual control of trace links is very expensive. Model Driven Architecture and automated transformations should make impact analysis easier. The issue is that the impact analysis of software changes on real-world functional units is performed intuitively. Formalizing specifications of the environment and of software functionality, and analyzing them by means of the Topological Functioning Model, extends the possibilities of impact analysis. This paper demonstrates the establishment of formal trace links from user requirements and analysis artifacts to real-world functional units and entities. These links show element interdependence explicitly and hence make the impact analysis more thorough.

Paper Nr: 7
Title:

Knowledge Integration for Domain Modeling

Authors:

Armands Slihte

Abstract: This research integrates artificial intelligence (AI) and system analysis by exploiting ontology, natural language processing (NLP), business use cases and model-driven architecture (MDA) for knowledge engineering and domain modeling. We describe an approach for compounding declarative and procedural knowledge in a way that corresponds to AI and system analysis standards and is suitable for acquiring a domain model corresponding to MDA standards. We recognize the possibility of automatically transforming this knowledge into a Computation Independent Model (CIM) for MDA.

Paper Nr: 8
Title:

Practical Experiments with Code Generation from the UML Class Diagram

Authors:

Janis Sejans and Oksana Nikiforova

Abstract: The paper turns attention to the problems of code generators that produce code from the UML class diagram in advanced CASE tools. The authors give a general introduction to code generator types and describe their structure and principles of operation. Three tools are analyzed with respect to their ability to generate program code from the UML class diagram: two modelling tools, namely Sparx Enterprise Architect and Visual Paradigm, and the programming environment Microsoft Visual Studio .NET. Program code is generated from different fragments of the UML class diagram in all three tools, and the obtained code lines are compared with the expected ones based on the model semantics and the syntax of the programming language C#. The authors summarize the results of the practical experiments with code generation by highlighting different types of errors in the generated code, and draw conclusions about the directions of the evolution of code generators in the near future.

Paper Nr: 9
Title:

Several Issues on the Definition of Algorithm for the Layout of the UML Class Diagrams

Authors:

Arturs Galapovs and Oksana Nikiforova

Abstract: System modeling is one of the important tasks to be solved during software development. As software systems become more complex, higher requirements are imposed on the demonstrative presentation of the system to be developed. To address this task, the main attention is devoted to the transparency of the model elements within the graphical presentation of the system. The paper defines a classification of the different types of UML diagrams created during the development of a software system. This classification is based on the different combinations of nodes and arcs of the diagram graph. The UML class diagram is selected for deeper analysis of element layout. The authors propose using the main principles of genetic algorithms to automate the layout of diagrams created manually. The current results are still quite theoretical, and the authors will continue the research based on the issues defined in this paper.

Paper Nr: 10
Title:

Towards the Refinement of Topological Class Diagram as a Platform Independent Model

Authors:

Uldis Donins, Janis Osis and Armands Slihte

Abstract: In this paper a refinement process for the topological class diagram is presented. The refinement process aims to lower the abstraction level of the initial topological diagram, which is obtained from the topological functioning model. The topological functioning model uses mathematical foundations that holistically represent the complete functionality of the problem and application domains. By lowering the abstraction level of the topological class diagram, it gains additional information that is needed during the software development and maintenance phases. The refinement process consists of six steps. As a result of applying the refinement process, a rich topological class diagram with a lower abstraction level is obtained. The refinement process is a part of the topological modeling approach, and it is shown in the context of a laundry business system software development project. By applying the topological modeling approach it is possible to create the computation independent model in a formal way and to transform it into the platform independent model.

Short Papers
Paper Nr: 11
Title:

Advancements of the Topological Functioning Model for Model Driven Architecture Approach

Authors:

Armands Slihte, Uldis Donins and Janis Osis

Abstract: This paper describes the advancements of the Topological Functioning Model (TFM) for Model Driven Architecture (MDA) approach. This approach recognizes the computation independent nature of a TFM and suggests it to be used as the Computation Independent Model (CIM) within MDA, thus partially automating system analysis. Since the proposal of this approach, there have been a number of significant improvements, revealing new possibilities for system analysis, domain modeling and system design modeling. These advancements include integrating knowledge engineering with system analysis for domain modeling and acquiring a Topological Class Diagram, thus providing unique features within Platform Specific Model (PSM) for further transformation and code generation.

EAST

Full Papers
Paper Nr: 4
Title:

Validating Search Processes in Systematic Literature Reviews

Authors:

Barbara Kitchenham

Abstract: Context: Systematic Literature Reviews (SLRs) need to employ a search process that is as complete as possible. It has been suggested that an existing set of known papers can be used to help develop an appropriate strategy. However, it is still not clear how to evaluate the completeness of the resulting search process. Aim: We suggest a means of assessing the completeness of a search process by evaluating the search results on an independent set of known papers. Method: We assess the results of a search process developed using a known set of papers by seeing whether it was able to identify papers from a different set of known papers. Results: Using a second set of known papers, we were able to show that a search process, which was based on a first set of known papers, was unlikely to be complete, even though the search process found all the papers in the first known set. Conclusions: When using a set of known papers to develop a search process, keep a “hold-out” sample to evaluate probable completeness.

Paper Nr: 6
Title:

Mutation Selection: Some Could be Better than All

Authors:

Zhiyi Zhang, Dongjiang You and Zhenyu Chen

Abstract: In previous research, many mutation selection techniques have been proposed to reduce the cost of mutation analysis. After a mutant subset is selected, researchers obtain a test suite which can detect all mutants in the subset. They then run all mutants against this test suite, and the detection ratio over all mutants is used to evaluate the effectiveness of the mutation selection technique: the higher the ratio, the better the selection technique. Obviously, this measurement rests on the presumption that the set of all mutants is the best set with which to evaluate test cases. However, there is no clear evidence to support this presumption. We therefore conducted an experiment to answer the question of whether the set of all mutants is the best set for evaluating test cases. Our experimental results show that a subset of mutants may be more similar to real faults than the set of all mutants. Two evaluation metrics were used to measure this similarity: rank and distance. This finding reveals that it may be more appropriate to use a subset rather than all the mutants at hand to evaluate the fault detection capability of test cases.

Paper Nr: 8
Title:

Circumstantial-evidence-based Judgment for Software Effort Estimation

Authors:

Zheng Li

Abstract: Expert judgment for software effort estimation is oriented toward direct evidence, i.e., the actual effort of similar projects or activities known through experts’ experience. However, the availability of direct evidence implies the requirement of suitable experts together with past data. The circumstantial-evidence-based judgment proposed in this paper focuses on the development experience deposited in human knowledge, and can be used to qualitatively estimate the implementation effort of different proposals for a new project by rational inference. To demonstrate the process of circumstantial-evidence-based judgment, this paper adopts diagnostic reasoning based on propositional learning theory to infer and compare effort estimates for implementing a Web service composition project with different techniques and contexts. The example shows that our proposed approach can help determine effort trade-offs before project implementation. Overall, circumstantial-evidence-based judgment is not an alternative but a complement to expert judgment, so as to facilitate and improve software effort estimation.

Paper Nr: 9
Title:

Measuring and Improving IT Service Support Processes: A Case Study

Authors:

Kai Zhou and Beijun Shen

Abstract: With the rapid development and increasing importance of information technology in many organizations, the focus of IT management has shifted from device-oriented management to service-oriented management. This paper describes the approach and results of measuring and improving IT service support processes at Bank of China and Nokia Co. The research applied best practices of the Organizational Process Performance process area defined in CMMI to IT service support processes, adopting the following steps: defining the process models, deriving their metrics from goals, collecting data, evaluating the processes, and identifying and eliminating bottlenecks. Two research questions concerning the research approach are raised and explored in this paper.

Paper Nr: 10
Title:

Investigating the Benefits of Combining PSP with Agile Software Development

Authors:

Wenrong Yang, Mengjiao Shen and Han Su

Abstract: Agile software development is gaining popularity due to the chaotic and changing environments of modern software projects. But there are cases of industrial teams experiencing failure with agile software development. One of the main reasons may be the inadequate capability of the team members involved. The Personal Software Process (PSP) is a plan-based software process intended to improve individual software engineers’ competence. Therefore, the integration of PSP with agile methods will probably help to realize the full advantages of agile methods. This paper aims to summarize the existing evidence on combining PSP with agile software development, so as to identify the benefits.

Paper Nr: 13
Title:

A Novel Approach to Quantifying the Influence of Software Process on Project Performance

Authors:

Jia-kuan Ma and Xiao-fan Tong

Abstract: Determining the appropriate process to be used is a key ingredient of project management. To this end, understanding the influence of activities on project performance can facilitate project management. However, quantifying such a relationship via the traditional Multiple Linear Regression method tends to be challenging, because the number of independent variables (activities in the software process) is usually larger than the size of the dataset. To address this problem, we propose a novel approach in this paper. By combining the Dantzig selector and the Ordinary Least Squares (OLS) regression method, our approach can derive the regression model in such challenging situations, which further sets the theoretical stage for studying the quantitative influence of software process on project performance.

Short Papers
Paper Nr: 5
Title:

Find the Best Greedy Algorithm with Base Choice Experiments for Covering Array Generation

Authors:

Jing Jiang and Changhai Nie

Abstract: A number of greedy algorithms have been proposed for covering array construction; most of them can be integrated into a framework, and more approaches can be derived from that framework. However, such a framework is affected by many factors, which makes its deployment and optimization very challenging. In order to identify the best configuration, we design Base Choice experiments based on six decisions of the framework to study it systematically, providing theoretical and practical guidelines for the design and optimization of greedy algorithms.

Paper Nr: 11
Title:

A Case Study of using WikiWinWin in Bug Negotiation

Authors:

Peng Wan

Abstract: In order to make well-considered decisions about software bugs, it is necessary for stakeholders to negotiate collaboratively. Traditional ways of making bug decisions not only lack an efficient tool to support the negotiation process, which may lead to inadequate negotiation, but also cannot preserve the negotiation records as experience for future development. In this paper, WikiWinWin was applied to industrial bug negotiation as a case study to obtain better solutions. A comparison is conducted between the negotiated solutions and the original solutions to demonstrate the improvement brought by WikiWinWin.

Paper Nr: 12
Title:

APIS - A Web-based System for PSP/TSP

Authors:

Chenyi Zhuang and Jingyi Li

Abstract: A useful supporting tool is very important to the success of PSP/TSP adoption. Based upon our experience, we summarize the requirements for a useful PSP/TSP supporting tool as follows: 1) support all phases of the PSP/TSP completely; 2) record and use historical data easily; 3) support teamwork effectively; and 4) provide team data in real time, enabling the PSP/TSP coach to see the process data at any time. In this paper, we propose a web-based PSP/TSP supporting tool, APIS (Advanced Process Improvement Solution), which meets all of the requirements stated above. APIS is deployed in a SaaS (Software as a Service) manner, and users can rent the software through the Internet. APIS has been used in more than thirty projects, and we have received a lot of positive feedback.