Model-Driven Engineering promotes the migration from code-centric to model-based software development. Systems consist of collections of models integrating different concerns and perspectives, while semi-automated model transformations analyse quality attributes and generate executable code by combining the information these models carry. Raising the abstraction level to models requires appropriate management technologies supporting the various software development activities. Among these, model comparison is one of the most challenging tasks and plays an essential role in various modelling activities. Its difficulty has led researchers to propose a multitude of approaches adopting different approximation strategies and exploiting specific knowledge of the involved models. Despite this, almost no support is provided for the systematic evaluation of comparison approaches against specific scenarios and modelling practices, namely benchmarks. In this article we propose Benji, a framework for the automated generation of model comparison benchmarks. In particular, given a set of difference patterns and an initial model, users can generate model manipulation scenarios resulting from the application of the patterns to the model. The generation support provided by the framework obeys specific design principles that are considered essential properties for the systematic evaluation of model comparison solutions, and are inherited from the general principles of evidence-based software engineering. The framework is validated through representative scenarios of model comparison benchmark generation.
Model comparison is a critical task in model-driven engineering. Its correctness enables effective management of model evolution, synchronisation, and other tasks, such as model transformation testing. The literature is rich in comparison algorithms and approaches; however, the same cannot be said for their systematic evaluation. In this paper we present Benji, a tool for the generation of model comparison benchmarks. In particular, Benji provides domain-specific languages to design experiments in terms of input models and possible manipulations, and based on those it generates corresponding benchmark cases. In this way, the experiment specification can be exploited as a systematic way to evaluate available comparison algorithms against the problem under study.
Multimodal journey planners are used worldwide to support travelers in planning and executing their journeys. Generated travel plans usually involve local mobility service providers, consider some travelers' preferences, and provide travelers with information about the routes' current status and expected delays. However, those planners cannot fully consider the special situations of individual cities when providing travel planning services. Specifically, authorities of different cities might define customizable regulations or constraints on movements in the cities (e.g., due to construction works or pandemics). Moreover, with the transformation of traditional cities into smart cities, travel planners could leverage advanced monitoring features. Finally, most planners do not consider relevant information impacting travel plans, for instance, information that might be provided by travelers (e.g., a crowded square) or by mobility service providers (e.g., a change in the timetable of a bus). To address the aforementioned shortcomings, in this paper we propose ROUTE, a framework for customizable smart mobility planners that better serve the needs of travelers, local authorities, and mobility service providers in the dynamic ecosystem of smart cities. ROUTE is composed of an architecture, a process, and a prototype developed to validate the feasibility of the framework. Experimental results show that the framework scales well in both centralized and distributed deployment settings.
One of the essential means of supporting Human-Machine Interaction is a (software) language, exploited to input commands and receive corresponding outputs in a well-defined manner. In the past, language creation and customization used to be accessible to software developers only. But today, as software applications gain more ubiquity, these features tend to be more accessible to application users themselves. However, current language development techniques are still based on traditional concepts of human-machine interaction, i.e. manipulating text and/or diagrams by means of more or less sophisticated keypads (e.g. mouse and keyboard). In this paper we propose to enhance the typical approach for dealing with language intensive applications by widening available human-machine interactions to multiple modalities, including sounds, gestures, and their combination. In particular, we adopt a Multi-Paradigm Modelling approach in which the forms of interaction can be specified by means of appropriate modelling techniques. The aim is to provide more advanced human-machine interaction support for language intensive applications.
When developing complex software-intensive systems, it is nowadays common practice to base the solution partly on existing software components. Selecting which components to use becomes a critical decision in development, but it is currently not well supported through methods and tools. This paper discusses how a decision support system for this problem could benefit from a software ecosystem approach, where participants share knowledge across organizations both through reuse of analysis models, and through partially disclosed past decision cases. It is shown how the architecture of this ecosystem becomes fundamental to deal with efficient knowledge sharing, while respecting constraints on integrity of intellectual property. A concrete proposal for an architecture is outlined, which is a distributed system-of-systems using web technologies. Experiences of a proof-of-concept implementation are also described.
Selecting sourcing options for software assets and components is an important process that helps companies to gain and keep their competitive advantage. The sourcing options include: in-house, COTS, open source and outsourcing. The objective of this paper is to further refine, extend and validate a solution presented in our previous work. The refinement includes a set of decision-making activities, which are described in the form of a process-line that can be used by decision-makers to build their specific decision-making process. We conducted five case studies in three companies to validate the coverage of the set of decision-making activities. The solution in our previous work was validated in two cases in the first two companies. In the validation, it was observed that no activity in the proposed set was perceived to be missing, although not all activities were conducted and the activities that were conducted were not executed in a specific order. Therefore, the refinement of the solution into a process-line approach increases flexibility and thus better captures the differences in the decision-making processes observed in the case studies. The applicability of the process-line was then validated in three case studies in a third company.
The paper presents the AIDOaRt project, a 3-year H2020-ECSEL European project involving 32 organizations, grouped in clusters from 7 different countries, focusing on AI-augmented automation supporting modeling, coding, testing, monitoring, and continuous development in Cyber-Physical Systems (CPS). To this end, the project proposes to combine Model Driven Engineering principles and techniques with AI-enhanced methods and tools for engineering more trustable and reliable CPSs. This paper introduces the AIDOaRt project, its overall objectives, and the requirements engineering methodology used. Based on that, it also focuses on describing the current plan regarding a set of tools intended to cover the model-based capability requirements of the project.
Cyber-Physical Systems (CPS) are heterogeneous and require cross-domain expertise to model. The complexity of these systems leads to questions about prevalent modeling approaches, their ability to integrate heterogeneous models, and their relevance to the application domains and stakeholders. The methodology for Multi-Paradigm Modeling (MPM) of CPS is not yet fully established and standardized, and researchers both apply existing methods for modeling complex systems and introduce their own. No systematic review has previously been performed to create an overview of the methods used for MPM of CPS. In this paper, we present a systematic mapping study that determines the models, formalisms, and development processes used over the last decade. Additionally, to determine the knowledge necessary for developing CPS, our review studied the background of actors involved in modeling and of the authors of surveyed studies. The results of the survey show a tendency to reuse multiple existing formalisms and their associated paradigms, in addition to a tendency towards applying transformations between models. These findings suggest that MPM is becoming an essential approach to model CPS, and highlight the importance of future integration of models, standardization of the development process, and education.
Gearbox bearing maintenance is one of the major overhaul cost items for railway electric propulsion systems. These bearings are continuously exposed to challenging working conditions, which compromise their performance and reliability. Various maintenance strategies have been introduced over time to improve the operational efficiency of such components while lowering the cost of their maintenance. One of these is predictive maintenance, which makes use of historical data to estimate a component’s remaining useful life (RUL). This paper introduces a machine learning-based method for calculating the RUL of railway gearbox bearings. The method uses unlabeled mechanical vibration signals from gearbox bearings to detect patterns of increased bearing wear and predict the component’s residual life span. We combined a data smoothing method, a change point algorithm to set thresholds, and regression models for prediction. The proposed method has been validated using real-world gearbox data provided by our industrial partner, Alstom Transport AB in Sweden. The results are promising, particularly with respect to the predicted failure time. Our model predicted the failure to occur on day 330, while the gearbox bearing’s actual lifespan was 337 days. The deviation of just 7 days is a significant result, since an earlier RUL prediction is usually preferable to avoid unexpected failure during operations. Additionally, we plan to further enhance the prediction model by including more data representing failing bearing patterns.
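The pipeline described in this abstract (smoothing, change-point thresholding, regression-based extrapolation) can be illustrated with a minimal sketch. All names, parameters, and the synthetic degradation signal below are assumptions for illustration, not the authors' actual code or data.

```python
# Hypothetical sketch of an RUL pipeline: smooth a vibration indicator,
# detect the wear onset via a simple change point, then extrapolate with
# a linear regression to a failure threshold.

def moving_average(xs, w):
    """Data-smoothing step: trailing mean of width w."""
    return [sum(xs[max(0, i - w + 1):i + 1]) / (i - max(0, i - w + 1) + 1)
            for i in range(len(xs))]

def change_point(xs, threshold):
    """First day the smoothed indicator exceeds the wear threshold."""
    for day, x in enumerate(xs):
        if x > threshold:
            return day
    return None

def linear_fit(days, xs):
    """Ordinary least-squares slope and intercept (stdlib only)."""
    n = len(days)
    mx, my = sum(days) / n, sum(xs) / n
    slope = (sum((d - mx) * (x - my) for d, x in zip(days, xs))
             / sum((d - mx) ** 2 for d in days))
    return slope, my - slope * mx

def predict_failure_day(signal, wear_threshold, failure_level, window=5):
    smooth = moving_average(signal, window)
    onset = change_point(smooth, wear_threshold)
    if onset is None:
        return None  # no degradation trend detected yet
    days = list(range(onset, len(smooth)))
    slope, intercept = linear_fit(days, smooth[onset:])
    return (failure_level - intercept) / slope  # day the trend hits failure

# Synthetic degradation signal: flat until day 200, then linear wear.
signal = [1.0 if d < 200 else 1.0 + 0.05 * (d - 200) for d in range(300)]
print(round(predict_failure_day(signal, wear_threshold=1.5, failure_level=8.0)))
```

On this synthetic signal the extrapolation predicts failure around day 342; in practice the change-point and regression steps would be driven by the bearing-specific thresholds learned from historical data.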
Model-based methods and techniques continuously evolve to meet the increasing challenges of modern-day technical landscapes. In parallel to model-based methods, other paradigms are similarly maturing and being integrated, one of which is DevOps. Model-based methods and DevOps are perceived to provide benefits when viewed in isolation. Recently, there has been an increased interest in combining the two paradigms, with various proposals and early adoption results. However, little focus is put on the practitioners' view. In this paper, we propose a methodology that aims to utilise model-driven engineering and DevOps practices in conjunction. Together with the methodology, we present an early evaluation of it from a practitioner's perspective. In particular, we study a large and long-running student project aiming to build a solar vehicle, presenting the current integration and potential future directions. In this paper we limit the observation to the development phase. Early feedback from the case study indicates significant benefits for several identified project pain points, and it is expected that more benefits will emerge when more advanced DevOps aspects are integrated with model-based methods and the project matures.
In general, trains are referred to as environment-friendly transportation means when compared e.g. to cars, buses, or aircraft, as modern trains are electrified systems. Unfortunately, the costs of creating and maintaining railway infrastructures, notably the overhead lines powering the trains, limit their expansion potential. In this respect, the advances in battery technologies are disclosing new opportunities, such as serving partially electrified tracks. In particular, on-board batteries can be used as backup energy where overhead lines are not available. In such scenarios, analysing battery requirements and evaluating possible solutions is of critical importance. This paper proposes a model-based systems engineering methodology for evaluating the feasibility of heterogeneous battery systems in the railway domain. The methodology leverages separation of concerns to reduce the complexity of the problem and abstracts the different railway system components by means of corresponding simulation models. The methodology is illustrated through a study performed at an industrial partner; in particular, the paper discusses how simulation models have been conceived, refined, validated, and integrated to analyse the properties of various battery configurations for several passenger trains operating on commercial lines in France. Interestingly, the results demonstrate that heterogeneous battery systems provide a suitable trade-off alternative when compared to homogeneous batteries.
Tasks performed by users in exchange for some reward, also known as quests or challenges, are one of the essential elements found in gamified systems, including systems for behavioral change. These elements can be tailored to specific players, according to their profile features and past performance, in order to deliver a more personalized and motivating experience. However, in order to automatically generate challenges, a formal, generalizable model of the essential building blocks of such game elements and their internal relations is needed. Although some work has been carried out in the past to define quests and challenges, a widely agreed-upon definition is still missing. Such an abstract definition should be employable across different application domains and scenarios and be flexible with respect to implementation details and human factors. In this work, we employ a model-driven approach to (1) propose a formal definition of quests and challenges in gamified systems, focusing on systems for behavioral change in the mobility domain, (2) model quests by means of a Domain-Specific Language implementing the proposed definition, and (3) take the first steps towards automatic rule generation by demonstrating a mapping between our model and Drools syntax compatible with an existing gamification engine. In particular, we illustrate how to ease the implementation of quests and challenges by using an example from an existing gamified system in the sustainable mobility domain.
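The mapping from a quest model to Drools rules described in this abstract can be sketched minimally as follows. The challenge fields (`metric`, `target`, `reward`) and the `Player` fact type are illustrative assumptions, not the paper's actual DSL or gamification engine API.

```python
# Illustrative sketch: a minimal quest/challenge model as a Python data
# structure, plus a naive mapping to a Drools-style production rule string.
from dataclasses import dataclass

@dataclass
class Challenge:
    name: str      # unique quest identifier
    metric: str    # player counter the quest observes, e.g. "km_by_bike"
    target: float  # completion threshold
    reward: int    # points granted on completion

def to_drools(ch: Challenge) -> str:
    """Render the challenge as a Drools-like rule (hypothetical fact model)."""
    return (
        f'rule "{ch.name}"\n'
        'when\n'
        f'    $p : Player( counters["{ch.metric}"] >= {ch.target} )\n'
        'then\n'
        f'    $p.addPoints({ch.reward});\n'
        'end'
    )

quest = Challenge("bike_week", "km_by_bike", 25.0, 150)
print(to_drools(quest))
```

A real generator would of course target the concrete fact model of the gamification engine; the point of the sketch is that a declarative quest definition suffices to derive the executable rule.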
The advent of complex Cyber–Physical Systems (CPSs) creates the need for more efficient engineering processes. Recently, DevOps promoted the idea of a closer continuous integration between system development (including its design) and operational deployment. Although their use is currently still limited, Artificial Intelligence (AI) techniques are suitable candidates for improving such system engineering activities (cf. AIOps). In this context, AIDOaRT is a large European collaborative project that aims at providing AI-augmented automation capabilities to better support the modeling, coding, testing, monitoring, and continuous development of CPSs. The project proposes to combine Model Driven Engineering principles and techniques with AI-enhanced methods and tools for engineering more trustable CPSs. The resulting framework will (1) enable the dynamic observation and analysis of system data collected at both runtime and design time and (2) provide dedicated AI-augmented solutions that will then be validated in concrete industrial cases. This paper describes the main research objectives and underlying paradigms of the AIDOaRt project. It also introduces the conceptual architecture and proposed approach of the AIDOaRt overall solution. Finally, it reports on the actual project practices and discusses the current results and future plans.
This paper introduces a novel model-driven methodology for the software development of real-time distributed vehicular embedded systems on single- and multi-core platforms. The proposed methodology opens up the opportunity of improving the cost-efficiency of the development process by providing automated support to identify viable design solutions with respect to selected non-functional requirements. To this end, it leverages the interplay of modelling languages for the vehicular domain, whose integration is achieved by a suite of model transformations. An instantiation of the methodology is discussed for timing requirements, which are among the most critical ones for vehicular systems. To support the design of temporally correct systems, a cooperation between EAST-ADL and the Rubus Component Model is built up by means of model transformations, enabling timing-aware design and model-based timing analysis of the system. The applicability of the methodology is demonstrated as a proof of concept on industrial use cases performed in cooperation with our industrial partners.
In 2014, a new software development approach started to get a foothold: low-code development. From its early days, practitioners in software engineering have shown a rapidly growing interest in low-code development. In 2021 alone, the revenue of low-code development technologies reached 13.8 billion USD. Moreover, the business success of low-code development has been accompanied by a growing interest from the software engineering research community. The model-driven engineering community has shown a particular interest in low-code development due to certain similarities between the two. In this article, we report on the planning, execution, and results of a multi-vocal systematic review on low-code development, with a special focus on its relation to model-driven engineering. The review is intended to provide a structured and comprehensive snapshot of low-code development at the peak of its inflated-expectations technology adoption phase. From an initial set of 720 potentially relevant peer-reviewed publications and 199 grey literature sources, we selected 58 primary studies, which we analysed according to a meticulous data extraction, analysis, and synthesis process. Based on our results, we tend to frame low-code development as a set of methods and/or tools in the context of a broader methodology, often identified as model-driven engineering.
The adoption of model-driven engineering in the automotive domain resulted in the standardization of a layered architectural description language, namely EAST-ADL, which provides means for enforcing abstraction and separation of concerns, but no support for automation among its abstraction levels. This support is particularly helpful when manual transitions among levels are tedious and error-prone. This is the case for the design and implementation levels. Certain fundamental analyses (e.g., timing), which have a significant impact on design decisions, give precise results only if performed on implementation-level models, which are currently created manually by the developer. When dealing with complex systems, this task soon becomes overwhelming, leading to the creation of a subset of models based on the developers' experience; relevant implementation-level models may therefore be missed. In this work, we describe means for automation between the EAST-ADL design and implementation levels to anticipate end-to-end delay analysis at design level for driving design decisions.
Agile development aims at switching the focus from processes to interactions between stakeholders, from heavy to minimalistic documentation, from contract negotiation and detailed plans to customer collaboration and prompt reaction to changes. With these premises, requirements traceability may appear to be an overly exigent activity, with little or no return on investment. However, since testing remains crucial even when going agile, developers need to identify at a glance what to test and how to test it. That is why, even though requirements traceability has historically faced firm resistance from the agile community, it can provide several benefits when promoting precise alignment of requirements with testing. This paper reports on our experience in promoting traceability of requirements and testing in the data communications for mission-critical systems in an industrial Scrum project. We define a semi-automated requirements tracing mechanism which coordinates four traceability techniques. We evaluate the solution by applying it to an industrial project aiming at enhancing the existing Virtual Router Redundancy Protocol by adding Simple Network Management Protocol support.
The size, complexity and heterogeneity of vehicular software systems have been constantly increasing. As a result, there is a growing consensus on the need to leverage model-based techniques for automating, thus taming, the error-proneness of tedious engineering tasks. Our methodology employs a one-to-many model transformation for generating a set of implementation models from a single design model. Then, it evaluates the appropriateness of each generated model by means of model-based timing analysis. In this ongoing work, we discuss an enhancement of our methodology where model-based timing analysis is extended for running on a single model with uncertainty.
Models and model transformations, the two core constituents of Model-Driven Engineering, aid software development by automating, thus taming, the error-proneness of tedious engineering activities. In most cases, the result of these automated activities is an overwhelming amount of information. This is the case for one-to-many model transformations that, e.g. in design-space exploration, can potentially generate a massive amount of candidate models (i.e., a solution space) from one single model. In our scenario, from one design model we generate a set of possible implementation models on which timing analysis is run. The aim is to find the best model from a timing perspective. However, multiple implementation models can have equally good analysis results. Therefore, the engineer is expected to investigate the solution space to make a final decision, using criteria which fall outside the analysis criteria themselves. Since candidate models can be many and very similar to each other, manually finding differences and commonalities is an impractical and error-prone task. In order to provide the engineer with an expressive representation of the models' commonalities and differences, we propose the use of modelling with uncertainty. We achieve this by elevating the solution space to first-class status, adopting a compact notation capable of representing the solution space by means of a single model with uncertainty. Commonalities and differences are thus represented by means of uncertainty points, so the engineer can easily grasp them and consistently make her decision without manually inspecting each model individually.
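The idea of collapsing a solution space into a single model with uncertainty points can be sketched as follows. The attribute-map representation of candidate models is an assumption for illustration, not the authors' notation.

```python
# Minimal sketch: candidate implementation models as attribute maps,
# merged into one model where attributes that differ across candidates
# become "uncertainty points" listing the alternatives.

def merge_with_uncertainty(candidates):
    keys = set().union(*candidates)
    merged = {}
    for k in keys:
        values = {c.get(k) for c in candidates}
        # One shared value -> commonality; several values -> uncertainty point.
        merged[k] = values.pop() if len(values) == 1 else sorted(values, key=str)
    return merged

# Three hypothetical candidates differing only in taskB's core allocation.
m1 = {"taskA.core": 0, "taskB.core": 1, "period": 10}
m2 = {"taskA.core": 0, "taskB.core": 2, "period": 10}
m3 = {"taskA.core": 0, "taskB.core": 3, "period": 10}
print(merge_with_uncertainty([m1, m2, m3]))
```

Here the engineer sees at a glance that the candidates agree on everything except `taskB.core`, which surfaces as a single uncertainty point with three alternatives instead of three near-identical models to diff by hand.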
Component-Based Software Engineering has been recognized as an effective practice for dealing with the increasing complexity of the software for vehicular embedded systems. Despite the advantages it has introduced in terms of reasoning, design and reusability, the software development for vehicular embedded systems is still hampered by constellations of different processes, file formats and tools, which often require manual ad hoc translations. By exploiting the interplay of Component-Based Software Engineering and Model-Driven Engineering, we take initial steps towards the definition of a seamless chain for the structural, functional and execution modeling of software for vehicular embedded systems. To this end, one of the entry requirements is the definition of metamodels for all the technologies used along the software development. In this work, we define a metamodel for an industrial component model, the Rubus Component Model, used for the software development of vehicular real-time embedded systems by several international companies. We focus on the definition of metamodeling elements representing the software architecture.
We discuss the problem of extracting control and data flows from vehicular distributed embedded systems at higher abstraction levels during their development. Unambiguous extraction of control and data flows is a vital part of the end-to-end timing model which is used as input by the end-to-end timing analysis engines. The goal is to support end-to-end timing analysis at higher abstraction levels. In order to address the problem, we propose a two-phase methodology that exploits the principles of Model-Driven Engineering and Component-Based Software Engineering. Using this methodology, the software architecture at a higher level is automatically transformed into all legal implementation-level models. The end-to-end timing analysis is performed on each generated implementation-level model and the analysis results are fed back to the design-level model. This activity supports design space exploration, model refinement and/or remodeling at higher abstraction levels for tuning the timing behavior of the system.
According to the Model-Driven Engineering paradigm, one of the entry requirements when realising a seamless tool chain for the development of software is the definition of metamodels, to regulate the specification of models, and model transformations, for automating manipulations of models. In this context, we present a metamodel definition for the Rubus Component Model, an industrial solution used for the development of vehicular embedded systems. The metamodel includes the definition of structural elements as well as elements for describing timing information. In order to show how, using Model-Driven Engineering, the integration between different modelling levels can be automated, we present a model-to-model transformation between models conforming to EAST-ADL and models described by means of the Rubus Component Model. To validate our solution, we exploit a set of industrial automotive applications to show the applicability of both the Rubus Component Model metamodel and the model transformation.
There are various methodologies that support the extraction of timing models from EAST-ADL design-level models during the development of vehicular embedded software systems. These timing models are used to predict the timing behavior of the systems by performing end-to-end timing analysis. This paper presents, for the first time, a comparative evaluation of three such methodologies. We present an evaluation framework that consists of several evaluation features. Using the framework, we compare and evaluate the methodologies against each feature. Ultimately, the evaluation results can be used as guidelines for selecting the most suitable methodology with respect to the end-to-end timing behavior of a given vehicular embedded application.
Software in modern vehicles consists of multi-criticality functions, where a function can be safety-critical with stringent real-time requirements, less critical from the vehicle operation perspective but still with real-time requirements, or not critical at all. Next-generation autonomous vehicles will require higher computational power to run multi-criticality functions, and such power can only be provided by parallel computing platforms such as multi-core architectures. However, current model-based software development solutions and related modelling languages have not been designed to effectively deal with challenges specific to multi-core, such as core interdependency and controlled allocation of software to hardware. In this paper, we report on the evolution of the Rubus Component Model for the modelling, analysis, and development of vehicular software systems with multi-criticality for deployment on multi-core platforms. Our goal is to provide a lightweight and technology-preserving transition from model-based software development for single-core to multi-core. This is achieved by evolving the Rubus Component Model to capture explicit concepts for multi-core and parallel hardware and for expressing variable criticality of software functions. The paper illustrates these contributions through an industrial application in the vehicular domain.
The vehicular industry has exploited model-based engineering for the design, analysis, and development of single-core vehicular systems. The next generation of autonomous vehicles will require higher computational power, which can only be provided by parallel computing platforms such as multi-core electronic control units. Current model-based software development solutions and related modelling languages, originally conceived for single-core, cannot effectively deal with multi-core-specific challenges, such as core interdependency and allocation of software to hardware. In this paper, we propose an extension to the Rubus Component Model, central to the Rubus model-based approach, for the modelling, analysis, and development of vehicular systems on multi-core. Our goal is to provide a lightweight transition of a model-based software development approach from single-core to multi-core, without disrupting the current technological assets in the vehicular domain.
The term gamification was introduced in the early 2000s [13]; its central idea is the use of game elements in non-entertainment application domains to foster motivation [8], [15], [6]. There is a considerable amount of literature concerning gamification concepts [5], [16], related taxonomies [18], [17], and literature reviews [10].
Multimodal journey planners have been introduced with the goal of providing travellers with itineraries involving two or more means of transportation to go from one location to another within a city. Most of them take into account user preferences and habits, and are able to notify travellers of real-time traffic information, delays, schedule updates, etc. To make urban mobility more sustainable, the journey planners of the future must include: (1) techniques to generate journey alternatives that take into account not only user preferences and needs but also specific city challenges and local mobility operators' resources; (2) agile development approaches to make the update of the models and information used by the journey planners a self-adaptive task; (3) techniques for continuous journey monitoring, able to understand when a current journey is no longer valid and to propose alternatives. In this paper we present the experience gained during the development of a complete solution for mobility planning based on model-driven engineering techniques. Mobility challenges, resources and remarks are modelled by corresponding languages, which in turn support the automated derivation of a smart journey planner. By means of the introduced automation, it has been possible to reduce the complexity of encoding journey planning policies and to make journey planners more flexible and responsive with respect to adaptation needs.
Although there are many city journey planners already available in the market and involving various transportation services, there is none yet that allows city mobility operators and local government municipalities to be an active part of the city's mobility. In this demonstrator, we present our first attempt towards multi-view based modelling of adaptive and multimodal city journey planners. In particular, by exploiting Model-Driven Engineering (MDE) techniques, the different stakeholders involved in the city mobility are able to provide their own updated information or promote their own challenges at higher levels of abstraction. Such information is then automatically translated into code-based artefacts that implement/ensure the desired journey planning behaviour, notably to filter travel routes and to make the city mobility more sustainable. The journey planner prototype, implementing the proposed solution, is demonstrated in the context of Trento city mobility. A supporting video illustrating the main features and a demonstration of our solution can be found at: https://youtu.be/KM21WD2dQGs, while the related artefacts and the details on how to create your own prototype are available at the demo GitHub repository, reachable at https://github.com/modelsconf2018/artifact-evaluation/tree/master/bucchiarone.
Gamification, that is, the use of gaming elements in non-game contexts, has gained a lot of interest in all those settings where the engagement of target users needs to be stimulated. Education and training have historically struggled with keeping 'students' motivated to pursue the completion of their learning paths. Lately these issues have been exacerbated by distance education: on the one hand, virtual participation in courses makes education far more accessible than requiring students to sit in the same classroom (and at the same time); on the other hand, the missing 'community building' conveyed by physically attending the same course remarkably reduces students' engagement. In this respect, gamification has been applied as an engagement tool, e.g. in programming courses, by introducing challenges, awards, leader boards, and so forth, with the aim of motivating students to sustain their efforts towards completing their studies. In this paper we describe and compare our experiences with gamification solutions for programming and modelling. In particular, we identify some desirable features of gamification solutions for modelling courses, and illustrate our experiences in realizing them concretely. Our observations suggest that while in principle many of the gamification elements coming from programming courses could also be suitable to engage students in modelling, there still exist remarkable obstacles to realizing them in practice.
Motivational digital systems offer capabilities to engage and motivate end-users to foster behavioural changes towards a common goal. In general, these systems use gamification principles in non-game contexts. Over the years, gamification has gained consensus among researchers and practitioners as a tool to motivate people to perform activities with the ultimate goal of promoting behavioural change, or engaging the users to perform activities that can offer relevant benefits but which can be seen as unrewarding and even tedious. There exists a plethora of heterogeneous application scenarios towards reaching the common good that can benefit from gamification. However, an open problem is how to effectively combine multiple motivational campaigns to maximise the degree of participation without exposing the system to counterproductive behaviours. We conceive motivational digital systems as multi-agent systems: self-adaptation is a feature of the overall system, while individual agents may self-adapt in order to leverage other agents' resources, functionalities, and capabilities to perform tasks more efficiently and effectively. Consequently, multiple campaigns can be run and adapted to reach the common good. At the same time, agents are grouped into micro-communities in which agents contribute with their own social capital and leverage others' capabilities to balance their weaknesses. In this paper we propose our vision on how the principles at the base of autonomous and multi-agent systems can be exploited to design multi-challenge motivational systems to engage smart communities towards common goals. We present an initial version of a general framework based on the MAPE-K loop and a set of research challenges that characterise our research roadmap for the implementation of our vision.
Over the years, gamification has gained consensus among researchers and practitioners as a tool to motivate people to perform activities deemed tedious or unexciting. Hence, there exist many heterogeneous application domains that may benefit from gamification. However, the domain expert and the designer are often separate individuals with dissimilar backgrounds, skills, and understanding. Thus, they need a shared language to communicate and to design a gamified system in line with its ultimate goal, the implementation of which can then be left to the developers. While several studies from the literature have tackled the problem of formally defining a design language able to assist designers in code production, they rarely foresee a framework capable of including all the involved stakeholders (e.g., domain experts). Moreover, it is essential to allow those stakeholders to monitor the gameplay at runtime and intervene when necessary, as the design process is intrinsically iterative. In this work, we present a design framework that models the whole life cycle of gamification solutions, from the design to the execution and monitoring of the system. Finally, we present a prototype of the framework implemented in the Education domain.
Gamification refers to approaches that apply gaming elements and mechanics in contexts where gaming is not the main business purpose. Gamification principles have proven to be very effective in keeping target users engaged with everyday challenges, including dedication to education, use of public transportation, adoption of healthy habits, and so forth. The spread of gameful applications and the consequent growth of the user base are increasing their design and development complexity, e.g., due to the need for more and more customized solutions. In this respect, current state-of-the-art development approaches are either too close to programming or completely prepackaged. In the former case, domain and gamification experts are confronted with the abstraction gap between the concepts they would like to use and the corresponding implementation through coding. In the latter situation, customization opportunities are remarkably limited or again require hand-tuning through coding. In both scenarios, programmer tasks are tedious and error-prone, given the intrinsic characteristics of gamified applications, which are sets of rules to be triggered as a consequence of specific events. This chapter illustrates the language engineering endeavor devoted to the creation of the Gamification Design Framework (GDF) through MPS. GDF is conceived by pursuing two main principles: correctness-by-construction and automation. The former aims at providing a language infrastructure that intrinsically conveys consistency between the different aspects of a gameful application. The latter aspires to maximize generative features in order to reduce coding needs. As a result, GDF is implemented by means of MPS as a set of three layered domain-specific languages, where a lower-level language instantiates and extends the concepts defined in the language(s) above. Moreover, GDF is equipped with generators to automatically create gameful application structural components and behaviors, and to deploy them into a selected gamification engine.
Gamification is increasingly used to build solutions for driving the behaviour of target user populations. Gameful systems are typically exploited to keep users involved in certain activities and/or to modify an initial behaviour through game-like elements, such as awarding points, submitting challenges, and/or fostering competition and cooperation with other players. Gamification mechanisms are well defined and composed of different ingredients that have to be correctly amalgamated together; among these we find single/multi-player challenges targeting a certain goal and providing an adequate award as compensation. Since current approaches are largely based on hand-coding/tuning, when the game grows in complexity, keeping track of all the mechanisms and maintaining the implementation can become error-prone and tedious activities. In this paper, we describe a multi-level modelling approach for the definition of gamification mechanisms, from their design to their deployment and runtime adaptation. The approach is implemented by means of JetBrains MPS, a text-based meta-modelling framework, and validated using two gameful systems in the Education and Mobility domains.
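To make the kind of mechanism discussed above concrete, the following is a minimal, purely illustrative sketch of an event-triggered challenge with a goal and an award; all names (`Challenge`, `Player`, the example points) are hypothetical and not taken from the paper or its MPS implementation.

```python
# Hypothetical sketch of a rule-based gamification ingredient: a challenge
# advances on game events and grants its award once the goal is reached.
from dataclasses import dataclass, field

@dataclass
class Challenge:
    name: str
    goal: int            # target progress to complete the challenge
    award: int           # points granted on completion
    progress: int = 0
    completed: bool = False

    def on_event(self, points: int) -> int:
        """Advance progress on a game event; return points awarded (if any)."""
        if self.completed:
            return 0
        self.progress += points
        if self.progress >= self.goal:
            self.completed = True
            return self.award
        return 0

@dataclass
class Player:
    name: str
    score: int = 0
    challenges: list = field(default_factory=list)

    def record_event(self, points: int) -> None:
        # Every event both scores directly and is forwarded to all challenges.
        self.score += points
        for ch in self.challenges:
            self.score += ch.on_event(points)

player = Player("alice", challenges=[Challenge("daily-goal", goal=10, award=50)])
player.record_event(4)   # progress 4/10, no award yet
player.record_event(7)   # progress 11/10: challenge completed, +50 award
print(player.score)      # 4 + 7 + 50 = 61
```

Hand-coding many such interdependent rules is exactly what becomes tedious and error-prone as a game grows, which motivates raising their definition to the modelling level.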
Gamification refers to the exploitation of gaming mechanisms for serious purposes, like promoting behavioural changes, soliciting participation and engagement in activities, and so forth. In this demo paper we present the Gamification Design Framework (GDF), a tool for designing gamified applications through model-driven engineering mechanisms. In particular, the framework is based on a set of well-defined modelling layers that start from the definition of the main gamification elements, followed by the specification of how those elements are composed to design games, and are then progressively refined to reach concrete game implementation and execution. The layers are interconnected through specialization/generalization relationships so as to realize a multi-level modelling approach. The approach is implemented by means of JetBrains MPS, a language workbench based on projectional editing, and has been validated through two gameful systems in the Education and Mobility domains. A prototype implementation of GDF and related artefacts are available at the demo GitHub repository: https://github.com/antbucc/GDF.git, while an illustrative demo of the framework features and their exploitation for the case studies is shown in the following video: https://youtu.be/wxCe6CTeHXk.
Heterogeneous agents that cooperate to accomplish collective tasks constitute Collective Adaptive Systems (CAS). Engineering a CAS involves not only the definition of the individual agents, but also their roles in achieving a collective task and adaptation strategies to counteract environmental changes. Current solutions for specifying CAS typically tackle the problem at a low level of abstraction (e.g., writing XML files), making this task time-consuming and error-prone. Moreover, such a low level of abstraction hinders the understandability of the specification. Model-Driven Engineering (MDE) proposes to reduce the complexity of development by adopting models as first-class artifacts in the process. In this respect, this work proposes an MDE approach to enhance CAS specification. In particular, we introduce a domain-specific language (DSL) made up of three main views: one devoted to adaptive system design; one addressing ensemble definition; and one tackling collective adaptation. These three separate aspects are woven seamlessly by the DSL to constitute a complete CAS design. While the different views allow us to exploit separation of concerns to reduce complexity and focus on a specific aspect of the system, facing CAS specification at a higher level of abstraction permits the use of concepts closer to the experts of the involved domains. Moreover, the precise definition of modeling concepts through corresponding meta-models enables correctness-by-construction of the system specification.
Gamification refers to the employment of gaming mechanisms for non-gaming purposes. Its aim is to promote the engagement of target users in pursuing certain goals, e.g. completing education paths. In this paper we present POLYGLOT, a gamified notebook-like programming environment. The gamification extension was built to target programming language education, and in this work we illustrate how the approach can be adapted to text-based modelling languages. In particular, we demonstrate the use of gamification tailored to SysML v2 modelling. Each exercise is defined as a sequence of steps framed into notebook cells. On each cell submission, the POLYGLOT extension for .NET Interactive runs several analyzers to gain insights into the student's code before invoking the gamification engine, which checks whether the gathered data matches the teacher-defined expectations. Interestingly, since cell contents are language independent and exercise evaluations are delegated to the gamification engine, this solution enables the creation of heterogeneous narratives, that is, gamification scenarios mixing languages in the proposed exercises.
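The cell-submission flow described above can be sketched as follows. This is an assumed, simplified illustration, not POLYGLOT's actual code: the analyzer names, the fact dictionary, and the in-process "engine" check are all hypothetical stand-ins for the real analyzers and the remote gamification engine.

```python
# Illustrative flow: analyzers extract facts from a submitted cell, and an
# engine-like check compares those facts with teacher-defined expectations.
def line_count_analyzer(cell_source: str) -> dict:
    """Counts non-empty lines in the submitted cell."""
    return {"lines": len([l for l in cell_source.splitlines() if l.strip()])}

def keyword_analyzer(cell_source: str) -> dict:
    """Checks for a SysML v2 construct the teacher may expect."""
    return {"uses_part_def": "part def" in cell_source}

ANALYZERS = [line_count_analyzer, keyword_analyzer]

def evaluate_submission(cell_source: str, expectations: dict) -> bool:
    """Run all analyzers, merge their facts, and compare them with the
    teacher-defined expectations (standing in for the gamification engine)."""
    facts = {}
    for analyzer in ANALYZERS:
        facts.update(analyzer(cell_source))
    return all(facts.get(key) == value for key, value in expectations.items())

cell = "part def Vehicle {\n    part engine : Engine;\n}"
print(evaluate_submission(cell, {"uses_part_def": True}))  # True
```

Because the analyzers only produce a dictionary of facts and the check is independent of the cell's language, the same pipeline can evaluate exercises written in different languages, which is what enables the heterogeneous narratives mentioned above.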
Modeling is an essential and challenging activity in any engineering environment. It implies some hard-to-train skills such as abstraction and communication. Teachers, project leaders, and tool vendors have a hard time teaching or training their students, co-workers, or users. Gamification refers to the exploitation of gaming mechanisms for serious purposes, like promoting behavioral changes, soliciting participation and engagement in activities, etc. We investigate the introduction of gaming mechanisms into modeling tasks with the primary goal of supporting learning/training. The result is a gamified modeling environment named PapyGame. In this article, we present the approach adopted for the PapyGame implementation, the details of the gamification elements involved, and the derived conceptual architecture required for applying gamification in any modeling environment. Moreover, to demonstrate the benefits of using PapyGame for learning/training modeling, a set of user experience evaluations has been conducted. Correspondingly, we report the obtained results together with a set of future challenges we consider critical to making gamified modeling a more effective education/training approach.
Gamification refers to the exploitation of gaming mechanisms for serious purposes, like learning hard-to-train skills such as modeling. We present a gamified version of Papyrus, the well-known open source modeling tool. Instructors can use it to easily create new modeling games (including the tasks, solutions, levels, rewards, etc.) to help students learn any specific modeling aspect. The evaluation of the game components is delegated to the GDF gamification framework, which communicates bidirectionally with the Papyrus core via API calls. Our gamified Papyrus also includes a game dashboard component implemented with HTML/CSS/JavaScript and displayed thanks to a web browser embedded in an Eclipse view.
In this paper we propose CAStlE, an MDE approach to enhance Collective Adaptive System (CAS) specification. In particular, we introduce a domain-specific language (DSL) made up of three main views: one devoted to adaptive system design; one addressing ensemble definition; and one tackling collective adaptation. These three separate aspects are woven seamlessly by the DSL to constitute a complete CAS design. Moreover, each of the defined views conveys the creation of a corresponding model editor, which allows the three aspects of a CAS to be designed independently in CAStlE.
Modeling is an essential and challenging activity in any engineering environment, and it requires some hard-to-train skills such as abstraction and communication. This makes it difficult for educators to teach or train their students, co-workers, or users. The audience of this paper is both educators and learners who struggle with modeling. To address this challenge, we present PapyGame, a gamified version of a robust modeling environment (Papyrus) that aims to improve learners' motivation, make the learning process an enjoyable experience, and boost learning outcomes. Gamification is the exploitation of gaming mechanisms for serious purposes, such as promoting behavioral changes, soliciting participation and engagement in activities, and more. The paper presents PapyGame's functionality, architecture, illustrative scenarios, and its potential impact on both educators and learners.
Model-Based Systems Engineering (MBSE) is a growing paradigm for system development where models are the primary artefacts considered. However, MBSE often relies on semi-formal modelling languages and methods, limiting analytical capabilities. Co-simulation is argued in the literature to be a promising technology in the simulation domain for integrating heterogeneous models into unified simulations. The most commonly used standard for co-simulation is currently the Functional Mock-up Interface (FMI), supported by many tools in the industry. Recently there has been increasing interest in utilizing co-simulation in MBSE processes to enable simulation capabilities earlier in development, mainly by instantiating simulations via the FMI standard from system architecture views. This paper briefly argues the case for co-simulation in industrial MBSE and presents several barriers to integration from a holistic point of view. The paper highlights the need for further research and progress to improve the maturity of industrial adoption in MBSE workflows, while discussing the current outlook for FMI-based co-simulation orchestrated from architecture models.
In Model-Driven Engineering (MDE), the design of modelling languages and model transformations is still an expert's task. Domain experts, i.e. the stakeholders of languages and transformations, would like to independently define and use their own MDE ecosystem, but can currently only play a supporting role in those activities. In this position paper we discuss this problem from a more general perspective, arguing for the need for emergent MDE, that is, modelling languages and transformations should be inferred from how modelling concepts are used. Moreover, the paper outlines a possible research agenda with corresponding challenges towards the goal of emergent MDE ecosystems.
Component-based software engineering (CBSE) is based on the fundamental concepts of components and bindings, i.e. units of decomposition and their interconnections. By adopting CBSE, a system is built up from a set of reusable parts. This entails that system functionalities are appropriately identified so that implementing components can be selected accordingly. In turn, this means that each component-based design involves at least two different instantiation levels: i) one for designing the system in terms of components and their interconnections, and ii) one for linking possible implementation alternatives to each of the existing components. In general, this twofold instantiation is managed at the same metamodelling level through the use of relationships. Although such solutions are expressive enough to model a component-based system, they cannot represent the instantiation relationship between, e.g., a component and its implementations. As a consequence, validity checks have to be hard-coded in a tool, while the interconnection between component and implementation has to be managed by the user. In this paper we propose to exploit deep metamodelling techniques for implementing CBSE mechanisms. We revisit the main CBSE concepts through this new vision by showing their counterparts in a deep metamodelling based environment. Interestingly, multiple instantiation levels enhance the expressive power of CBSE approaches, thus enabling a more precise system design.
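The two-instantiation-level idea can be illustrated with the standard deep metamodelling notions of clabjects and potency; the sketch below is an assumed, minimal rendering of those concepts (the class names and the `Parser` example are hypothetical, not the paper's implementation).

```python
# Minimal sketch of deep metamodelling: a clabject is simultaneously an
# instance of the element above it and a type for the elements below it,
# with 'potency' counting how many instantiation levels remain.
class Clabject:
    def __init__(self, name, potency, meta=None):
        # Built-in validity check: a potency-0 element cannot be a type.
        if meta is not None and meta.potency == 0:
            raise ValueError(f"{meta.name} has potency 0 and cannot be instantiated")
        self.name = name
        self.potency = potency
        self.meta = meta  # the clabject this one is an instance of

    def instantiate(self, name):
        # Each instantiation step decreases potency by one.
        return Clabject(name, self.potency - 1, meta=self)

# Level 2: the CBSE metamodel concept.
component = Clabject("Component", potency=2)
# Level 1: a design element, instance of Component and type of implementations.
parser = component.instantiate("Parser")
# Level 0: a concrete implementation, linked to its component by instantiation
# rather than by an ordinary same-level relationship.
antlr_parser = parser.instantiate("ANTLRParser")
print(antlr_parser.meta.name, antlr_parser.potency)  # Parser 0
```

Here the instantiation link between `Parser` and `ANTLRParser` is a first-class part of the modelling framework, so checks such as "an implementation cannot itself be instantiated" hold by construction instead of being hard-coded in a tool.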
Software architecture is no longer a mere system specification resulting from the design phase; it also includes the process by which that specification was carried out. In this respect, design decisions in component-based software engineering play an important role: they are used to enhance the quality of the system, keep up with the current market level, maintain partnership relationships, reduce costs, and so forth. For non-trivial systems, a recurring situation is the selection of an asset origin, that is, whether to go for in-house, outsourcing, open-source, or COTS when in need of a certain missing functionality. Usually, the decision-making process follows a case-by-case approach in which historical information is largely neglected: this avoids the overhead of keeping detailed documentation about past decisions, but hampers consistency among multiple, possibly related, decisions. The ORION project aims at developing a decision support framework in which historical decision information plays a pivotal role: it is used to analyse current decision scenarios, take well-founded decisions, and store the collected data for future exploitation. In this paper, we outline the potential of such a knowledge repository, including the information intended to be stored in it, and when and how to retrieve it within a decision case.
With the increasing adoption of Model-Driven Engineering (MDE), the support of distributed development, and hence model versioning, has become a necessity. MDE research investigations targeting (meta-)model versioning, conflict management, and model co-evolution have progressively recognized the importance of tackling the problem at a higher abstraction level, and a number of solving techniques have been proposed. However, in general, existing mechanisms hit the wall of semantics: when not only syntax is involved in the manipulations, the chances of providing precision and automation are remarkably reduced. In this paper we illustrate a novel version management proposal that leverages the separation between the linguistic and ontological aspects involved in a (meta-)modelling activity. In particular, we revisit the main versioning tasks in terms of the mentioned separation. The aim is to maximize the amount of versioning problems that can be automatically addressed, while leaving those intertwined with domain-specific semantics to be solved separately, possibly by means of semi-automatic techniques and additional precision.