A Service Oriented Reflective Wireless Middleware

Bora Yurday (TUBITAK-MAM, ETCBASE Yazılım), Haluk Gumuskaya (Fatih University)

The role of middleware has become increasingly important in mobile computing, where applications and services from different wired and wireless businesses and service providers must be integrated. The requirements and functionality of wireless middleware can be met by Service-Oriented Computing, which can be an ideal paradigm for mobile services, while reflective middleware optimizes its responses to changing environments and requirements. In this paper a Service Oriented Reflective Wireless Middleware (SORWiM) is proposed. It provides basic (Event, Messaging, Location, and Redirection) and composite services for efficient and reliable information discovery and dissemination in ad hoc mobile environments. One of the primary goals of this research is to investigate how the construction of mobile services can benefit from the Service-Oriented paradigm.

 

 

Procedures of Integration of Fragmented Data in a P2P Data Grid Virtual Repository

Kamil Kuliberda, Jacek Wislicki, Tomasz Kowalski, Radoslaw Adamus, Krzysztof Kaczmarski, Kazimierz Subieta

The paper describes a dynamic integration mechanism for the distributed resources of a virtual repository based on object-oriented databases in a grid architecture. The core architecture is based on SBA theory and its virtual updateable views. The virtual repository transparently processes heterogeneous data, producing conceptually and semantically coherent results. The integration apparatus is presented in brief, from the early sketch of the idea to a fully implemented prototype. We also explain an independent virtual P2P network for databases and a novel global index mechanism for the data grid.

 

 

Towards Facilitating Development of SOA Application with Design Metrics

Wei Zhao, Ying Liu, Jun Zhu, Hui Su (IBM)

With service-oriented architecture (SOA) and its supporting infrastructure, information systems consisting of various components are intended to exhibit both high cohesion and low coupling. However, it is unwise to treat the advanced technology and supporting infrastructure alone as a panacea. For each SOA application, design plays an important role in the success of the whole project. The services and the composition relationships among them are two critical factors that determine the quality of the SOA application and, in a cost-effective way, guide the subsequent development phases. In this paper, we show that measuring the design of SOA applications with several specific metrics can facilitate the efficient development of high-quality SOA applications. We performed an experimental study on an ongoing SOA project, employing metrics on its design to form judgments and make estimations. The project data in CVS was retrieved to reflect the actual situation of its implementation, integration, and testing. The analysis of these data shows that adopting design measurement in the early stage of SOA projects may avoid wasted effort and schedule delays, as well as provide a deeper grasp of, and more effective control over, issues in the following phases.

 

 

An Aspect-Oriented Framework for Service Adaptation

Woralak Kongdenfha (University of New South Wales), Regis Saint-Paul (University of New South Wales), Boualem Benatallah (University of New South Wales), Fabio Casati (HP)

Web services are an emerging technology for integrating heterogeneous applications. In application integration, internal services are interconnected with external resources to form a virtual enterprise. This places new standardization requirements on the external specification, i.e., the combination of service interfaces and business protocols, that interconnected services must obey. However, previously developed service implementations do not always conform to the standard and require adjustment.

In this paper, we characterize the problem of aligning an internal service implementation to a standardized external specification. We propose an aspect-oriented framework as a solution to support service adaptation. In particular, the framework consists of (i) a taxonomy of the possible types of mismatch between external specification and service implementation, (ii) a repository of aspect-based templates to automate the task of handling mismatches, and (iii) a tool to support template instantiation and execution together with the service implementation.

 

 

Automated Generation of BPEL Adapters

Antonio Brogi and Razvan Popescu (University of Pisa)

The heterogeneous, dynamic, distributed, and evolving nature of Web services calls for adaptation techniques to overcome various types of mismatches that may occur among services developed by different parties.

In this paper we present a methodology for the automated generation of (service) adapters capable of solving behavioural mismatches among BPEL processes. Given two communicating BPEL processes whose interaction may lock, the adaptation process builds (if possible) a BPEL process that allows the two to interoperate successfully. A key ingredient of the adaptation methodology is the transformation of BPEL processes into YAWL workflows.

 

 

Requirements and Method for Assessment of Service Interoperability

Stanislav Pokraev (Telematica Instituut), Dick Quartel (University of Twente), Maarten W. A. Steen (Telematica Instituut), Manfred Reichert (University of Twente)

Service interoperability is a major obstacle to realizing the SOA vision. Interoperability is the capability of multiple autonomous, heterogeneous systems to use each other's services effectively. It is about the meaningful sharing of functionality and information that leads to the achievement of a shared goal. In this paper we systematically explain what interoperability means, analyze possible interoperability problems, and define requirements for achieving service interoperability. Our main contributions are the definition of a conceptual framework for service modeling and the provision of a method for formally verifying whether a composite system meets the identified interoperability requirements.

 

 

Coordinated Co-allocator Model for Data Grid in a Multi-sender Environment

Bhuvaneswaran RS, Yoshiaki Katayama, Naohisa Takahashi (Nagoya Institute of Technology)

We propose a model that simultaneously allocates a data block request to multiple sites, termed co-allocation, to enable parallel data transfer in a grid environment. The model comprises a co-allocator and a monitor, naturally blended in a feedback loop. The co-allocation scheme adapts well to the highly inconsistent network performance of the sites concerned. The scheme initially obtains bandwidth parameters from the monitor module to fix the partition size, and the data transfer tasks are allocated to the servers with duplication. The scheme remains tolerant even when a link to one of the servers under consideration breaks or becomes idle: it selects the server that answered previous requests without failure in optimal response time. We used the Globus Toolkit for our framework and utilized the partial copy feature of GridFTP. We compared our scheme with existing schemes, and the results show a notable improvement in the overall completion time of data transfer.
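The bandwidth-aware partitioning step described above can be sketched as follows. This is an illustrative sketch, not the authors' implementation; the function and site names are hypothetical.

```python
# Sketch of bandwidth-proportional co-allocation: split a requested data
# block among replica servers in proportion to the bandwidth the monitor
# module reports for each of them.

def partition(total_bytes, bandwidths):
    """Split total_bytes among servers proportionally to observed bandwidth."""
    total_bw = sum(bandwidths.values())
    shares = {s: int(total_bytes * bw / total_bw) for s, bw in bandwidths.items()}
    # Assign any rounding remainder to the fastest server.
    fastest = max(bandwidths, key=bandwidths.get)
    shares[fastest] += total_bytes - sum(shares.values())
    return shares

shares = partition(1000, {"siteA": 30.0, "siteB": 10.0, "siteC": 10.0})
print(shares)  # {'siteA': 600, 'siteB': 200, 'siteC': 200}
```

A real co-allocator would re-run this partitioning as the monitor feeds back fresh bandwidth estimates, which is the feedback loop the abstract refers to.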

 

 

DECO: Data replication and Execution CO-scheduling for Utility Grids

Vikas Agarwal, Gargi Dasgupta, Koustuv Dasgupta, Amit Purohit, Balaji Viswanathan (IBM)

Vendor strategies to standardize grid computing as the IT backbone for service-oriented architectures have created business opportunities to offer grid as a utility service for compute- and data-intensive applications. With this shift in focus, there is a need to incorporate agreements that represent the QoS expectations of customer applications and prices they are willing to pay. To this end, we introduce a revenue model for utility grids, where each job is associated with a function that captures the reward accrued by servicing it within a specified deadline, and the penalty incurred on failing to do so. Scheduled execution of jobs on appropriate sites, along with timely transfer (replication) of data closer to compute sites, collectively work towards meeting these deadlines. However, traditional solutions have decoupled the execution of jobs from data transfer decisions. In this paper, we present DECO, a grid meta-scheduler that tightly integrates the data and compute dependencies of each job. It enables differentiated QoS, where profitable jobs are not only assigned to more efficient sites, but the datasets associated with them are also transferred at a higher priority. Experimental studies demonstrate that DECO earns significantly better revenue for the grid provider, when compared to existing scheduling methodologies.
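The deadline-based reward and penalty function associated with each job could take a shape like the following. This is our illustrative sketch; the parameter names and values are assumptions, not DECO's actual model.

```python
# Hypothetical per-job utility function for a utility grid: a job earns its
# full reward if it completes by the deadline, and incurs a penalty that
# grows linearly with lateness otherwise.

def job_utility(completion_time, deadline, reward, penalty_rate):
    if completion_time <= deadline:
        return reward
    return -penalty_rate * (completion_time - deadline)

print(job_utility(90, 100, 50.0, 2.0))   # 50.0: on time, full reward
print(job_utility(110, 100, 50.0, 2.0))  # -20.0: 10 units late
```

A meta-scheduler maximizing the sum of such utilities naturally prioritizes profitable jobs for efficient sites and high-priority data transfers, as the abstract describes.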

 

 

Division of Labor: Tools for Growth and Scalability of Grids

T. Freeman, K. Keahey (Argonne National Laboratory), B. Sotomayor (Argonne National Laboratory), I. Foster (Argonne National Laboratory), A. Rana(University of California, San Diego), F. Wuerthwein (University of California, San Diego)

To enable Grid scalability and growth, a usage model has evolved where resource providers make resources available not to individual users directly, but rather to larger units, called virtual organizations (VOs). Thus, the resource provider can focus on the dynamics of providing resources to the VOs while VOs specialize to provide resources to their users. Achieving such division of labor requires tools and mechanisms that would allow a resource provider to reliably delegate the usage of a specific resource quantum to a virtual organization in an application-independent way. In this paper, we describe the virtual workspace abstraction and argue that it provides mechanisms that enable the resource provider to perform this delegation of resources. We further describe the Workspace Service – a Grid service based on the Web Service Resource Framework (WSRF) exposing interfaces for remote provisioning and management of such workspaces. We describe the features and behavior of a workspace implementation that, combining the Xen hypervisor and Linux networking tools, allows for allocation and enforcement of resources at a finer-grained level than similar tools. Furthermore, we use this implementation to demonstrate how workspaces can be used to allocate resources to VO-specific infrastructure services called Edge Services.

 

 

Adaptive Preference Specifications for Application Sessions

Christine Julien (University of Texas at Austin)

In ubiquitous computing applications, it is essential that mobile participants be empowered to opportunistically connect to physical and software services available in their local environment. Our previous work has elucidated a model for allowing applications to specify the functional properties of the services to which they need to connect for proper operation. Our framework then connects applications to dynamic resources through the use of a novel suite of application sessions. In this paper, we revisit this framework to devise a mechanism for applications to specify preferences for one service provider over another. In this investigation, we argue that these preferences are actually provided by a set of stakeholders that participate in the session: the application itself, the service provider, and, more surprisingly, the network that connects the application and the provider. We develop a framework for each of these parties to specify allowable connections and preferences among various allowable connections. We demonstrate not only what kinds of properties can be expressed in our framework but also implementation paths for integrating them into the communication and application support infrastructure.

 

 

Discovering Web and JXTA Peer-to-Peer Services in a Unified Manner

Michael Pantazoglou, Aphrodite Tsalgatidou, George Athanasopoulos (National & Kapodistrian University of Athens)

Web services constitute the most prevailing instantiation of the service-oriented computing paradigm and are gradually making their way into the business world. Recently, however, representatives of other computing technologies, such as peer-to-peer (p2p), have also adopted the service-oriented approach and expose functionality as services. Thus the service-oriented community would benefit greatly if these heterogeneous services could be integrated and composed to build distributed applications. A key to achieving this integration is the establishment of a unified approach to service discovery. In this paper, we describe some features of a unified service search engine which is used to discover web and p2p services in a unified manner. More specifically, we exemplify how our unified approach is applied in the case of web and p2p service discovery against UDDI and JXTA, respectively. Additionally, we demonstrate how its flexible design enables our service search engine to process heterogeneous service advertisements and thus to exploit the advertised syntactic, semantic, and quality-of-service properties during matchmaking.

 

 

Mobile Ad Hoc Services: Semantic Service Discovery in Mobile Ad Hoc Networks

Andronikos Nedos (Trinity College Dublin)

Mobile ad hoc networks (MANETs) are a class of networks where autonomous mobile devices with wireless communication capabilities cooperate to provide spontaneous, multi-hop connectivity. The opportunistic and dynamic characteristics of these networks make discovery of services difficult as they preclude the use of agreed, predefined service interfaces. Using semantic services and permitting their description with multiple domain ontologies is more realistic in this environment because it increases service expressiveness and does not require consensus on a common representation. However, the techniques used in resource-rich, globally connected environments to relate different ontologies and discover semantic services are inappropriate in MANETs. We present here a model for semantic service discovery that facilitates distributed ontology matching and provides scalable discovery of service provider nodes. It uses a gossip protocol to randomly disseminate ontology concepts and a random walk mechanism to identify candidate providers. The model requires no central coordination and its use of randomisation gives good scalability properties.
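The gossip-based dissemination step can be illustrated with a toy simulation. This sketch is ours, not the paper's protocol: in each round, every informed node forwards the ontology concept to one randomly chosen node, so knowledge spreads without any central coordination.

```python
import random

# Toy gossip dissemination: starting from a seed node, each informed node
# forwards the concept to one random peer per round.

def gossip_rounds(nodes, seed_node, rounds, rng):
    informed = {seed_node}
    for _ in range(rounds):
        for _node in list(informed):   # snapshot: new nodes gossip next round
            informed.add(rng.choice(nodes))
    return informed

rng = random.Random(42)
nodes = list(range(20))
informed = gossip_rounds(nodes, seed_node=0, rounds=5, rng=rng)
print(len(informed))  # spread grows roughly exponentially in early rounds
```

The randomisation is what gives such protocols their scalability: no node needs a global view, yet concepts reach most of the network in a logarithmic number of rounds with high probability.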

 

 

A Hierarchical Framework for Composing Nested Web Processes

Haibo Zhao, Prashant Doshi (The University of Georgia)

Many of the previous methods for composing Web processes utilize either classical planning techniques such as forward or backward search, hierarchical task networks (HTNs), or decision-theoretic planners such as Markov decision processes (MDPs). While offering a way to automatically compose a desired Web process, these techniques do not scale to large processes containing several Web services. In addition, classical planners assume away the uncertainties involved in service invocations such as service failure. In this paper, we present a hierarchical approach for composing Web processes that may be nested – some of the components of the process may be Web processes themselves. We model the composition problem using a semi-Markov decision process (SMDP) that generalizes MDPs by allowing actions to be temporally extended. We use these actions to represent the invocation of lower level Web processes whose execution times are different from simple service invocations. We compare our approach to another hierarchical planner, HTN, and show that our method performs favorably in terms of cost effectiveness and robustness to uncertainty. By exploiting the hierarchy often found in Web processes, our approach is scalable to larger problems.
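One of the uncertainties a decision-theoretic composer must account for is service failure. The following sketch (our illustration, with made-up numbers) shows the expected cost of invoking a service that fails with some probability and is retried until success:

```python
# Expected cost of a service invocation with independent retries:
# E[cost] = c / (1 - p), since the number of tries is geometric.

def expected_invocation_cost(cost_per_try, p_fail):
    if not 0.0 <= p_fail < 1.0:
        raise ValueError("failure probability must be in [0, 1)")
    return cost_per_try / (1.0 - p_fail)

print(expected_invocation_cost(2.0, 0.5))  # 4.0: two tries on average
```

In the SMDP formulation, invoking a lower-level Web process is a temporally extended action whose duration and cost are themselves random, so expectations like this one enter the value function rather than fixed per-step costs.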

 

 

Design of Quality-based Composite Services

F. De Paoli, G. Lulli, A. Maurino (Universita di Milano Bicocca)

One of the key factors in the success of SOC-based systems is the capability of assuring a given Quality of Service. Knowledge and enforcement of Quality of Service allow for the definition of the agreements that are the basis of any business process. In this paper, we discuss a hybrid method for evaluating the qualities associated with services. The method is based on ontologies that state the relations between the elements involved and provide the computation methods. The method is part of a design methodology that addresses quality issues along the service life-cycle. A case study in the e-placement field illustrates a practical use of the approach.

 

 

Service composition (re)binding driven by application-specific QoS

Gerardo Canfora, Massimiliano Di Penta, Raffaele Esposito, Francesco Perfetto, Maria Luisa Villani (University of Sannio)

QoS-aware service composition and binding are among the most challenging and promising issues for service-oriented architectures. The aim of QoS-aware service composition is to determine the set of services that, once composed, will perform the required functionality and best contribute to achieving the level of QoS promised in Service Level Agreements (SLAs).

While existing work focuses on cross-domain QoS attributes, it is also relevant to support service composition and binding according to characteristics on the borderline between functional and non-functional attributes, often specific to the service domain.

This paper presents an approach for dealing with application-specific QoS attributes. It presents a tool that supports the definition of attribute types and the way attributes aggregate over a composition. Finally, it describes a QoS evaluator that, integrated with our previously developed binder, allows application-specific QoS attributes to be used for composite service binding and rebinding. The application of the approach is shown through a case study in the image processing domain.

 

 

Using Dynamic Asynchronous Aggregate Search for Quality Guarantees of Multiple Web Service Compositions

Xuan Thang Nguyen, Ryszard Kowalczyk, and Jun Han (Swinburne University of Technology)

With the increasing impact and popularity of Web service technologies in today's World Wide Web, the composition of Web services has received much interest as a means of supporting enterprise-to-enterprise application integration. For service providers and their partners, the Quality of Service (QoS) offered by a composite Web service is important. QoS guarantees for composite services have been investigated in a number of works; however, those works consider only an individual composition or take the viewpoint of a single provider. In this paper, we focus on the problem of QoS guarantees for multiple inter-related compositions and consider the global viewpoints of all providers engaged in the compositions. The contributions of this paper are twofold. First, we formalize the problem of QoS guarantees for multiple compositions and show that it can be modelled as a Distributed Constraint Satisfaction Problem (DisCSP), taking into account the dynamic nature of the Web service environment, in which compositions may be formed or dissolved at any time. Second, we present a dynamic DisCSP algorithm to solve the problem and discuss our initial experiments, which show the feasibility of our approach for multiple Web service compositions with QoS guarantees.
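The kind of local constraint each provider checks in a DisCSP formulation can be sketched as follows. This is our illustration with hypothetical field names and values, not the paper's algorithm: the aggregated QoS of a provider's composition must stay within its promised bound even when component services are shared across compositions.

```python
# Local QoS constraint check for one composition: for a sequential
# composition, end-to-end latency is the sum of the component latencies,
# and it must not exceed the SLA bound.

def sequential_latency(services):
    return sum(s["latency"] for s in services)

def satisfies(services, max_latency):
    return sequential_latency(services) <= max_latency

composition = [{"latency": 120}, {"latency": 250}, {"latency": 80}]
print(satisfies(composition, 500))  # True: 450 ms is within the 500 ms bound
```

In the distributed setting, each provider evaluates such constraints over its own composition while exchanging messages about shared services, which is exactly what a DisCSP algorithm coordinates.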

 

 

A Self-Healing Web Server Framework Using Differentiated Services

Henri Q. Naccache, Gerald C. Gannod, and Kevin A. Gary (Arizona State University)

Web-based portals are a convenient and effective mechanism for integrating information from a wide variety of sources, including Web services. However, since the availability and performance of Web services cannot be guaranteed, the availability of information and the overall performance of a portal can vary. In this paper, we describe a framework for developing autonomic self-healing portal systems that relies on the notion of differentiated services (e.g., services that provide common behavior with variable quality of service) in order to survive unexpected traffic loads and slowdowns in underlying Web services. We also present a theoretical performance model that predicts the impact of the framework on existing systems. We demonstrate the framework with an example and provide an evaluation of the technique.

 

 

Adaptive Web Processes Using Value of Changed Information

John Harney, Prashant Doshi (University of Georgia)

Web process composition has recently received much attention as an important problem for the Web services community. Many methods, based on planning techniques, have been proposed for building these compositions, and many of them rely on pre-defined models of the process environment. However, these methods assume that the models remain static and accurate throughout the life cycle of the Web process. Realistic environments are volatile: the characteristics of the service providers may change over time, so the Web process may become suboptimal if the model is not updated with the changes. We present an approach that intelligently adapts the models to changes in the process environment. Using our expectation of the possible changes, we provide a mechanism to compute the value of the changed information, which guides our decision about which service provider to query for new information. We demonstrate that this method performs better in terms of cost than a method that always queries all service providers for new information and a method that never adapts the model.
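The core decision rule behind a value-of-changed-information approach can be sketched like this. The function name and cost figures are our hypothetical illustration, not the paper's formulation: a provider is queried for fresh information only when the expected saving from updating the model exceeds the cost of the query itself.

```python
# Query a provider for fresh information only when the expected value of
# the changed information exceeds the cost of obtaining it.

def should_query(expected_cost_stale, expected_cost_fresh, query_cost):
    value_of_change = expected_cost_stale - expected_cost_fresh
    return value_of_change > query_cost

print(should_query(100.0, 70.0, 10.0))  # True: expect to save 30, spend 10
print(should_query(100.0, 95.0, 10.0))  # False: saving 5 is not worth 10
```

This is what distinguishes the approach from the two baselines in the abstract: always querying ignores `query_cost`, while never adapting ignores `value_of_change`.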

 

 

AMPol-Q: Adaptive Middleware Policy to Support QoS

Raja Afandi, Jianqing Zhang, and Carl A. Gunter (University of Illinois Urbana-Champaign)

There are many problems hindering the design and development of Service-Oriented Architectures (SOAs) that can dynamically discover and compose multiple services so that the quality of the composite service is measured by its End-to-End (E2E) quality, rather than that of individual services in isolation. The diversity and complexity of QoS constraints further limit the wide scale adoption of QoS-aware SOA. We propose extensions to current OWL-S service description mechanisms to describe QoS information of all the candidate services. Our solution is a middleware called AMPol-Q that enables clients to discover, select, compose, and monitor services that fulfill E2E QoS constraints. Our implementation and case studies demonstrate how AMPol-Q can accomplish these goals for web services that implement messaging.

 

 

SCENE: a service composition execution environment supporting dynamic changes disciplined through rules

Massimiliano Colombo, Elisabetta Di Nitto, Marco Mauri (CEFRIEL)

Service compositions are created by exploiting existing component services that are, in general, out of the control of the composition developer. The challenge nowadays is to make such compositions able to dynamically reconfigure themselves in order to address the cases when the component services do not behave as expected and when the execution context changes.

We argue that the problem has to be tackled at two levels: on the one side, the runtime platform should be flexible enough to support the selection of alternative services, the negotiation of their service level agreements, and the partial replanning of a composition. On the other side, the language used to develop the composition should support the designer in defining the constraints and conditions that will regulate such selection, negotiation, and replanning actions at runtime. In this paper we present the SCENE platform that addresses the above issues by offering a language for composition design that extends the standard BPEL language with rules used to guide the execution of self-reconfiguration operations.

 

 

A Model-Based Framework for Developing and Deploying Data Aggregation Workflows

Ramakrishna Soma (USC), Amol Bakshi (USC), Viktor K. Prasanna (USC), Will Da Sie (Chevron Corporation)

Data aggregation services compose, transform, and analyze data from a variety of sources such as simulators and real-time sensor feeds. This paper proposes a methodology for accelerating the development and deployment of data aggregation modules in a service-oriented architecture. Our framework allows existing semantic Web service techniques to be embedded into a programming language, combining the ease of use and flexibility of the former with the expressiveness and tool support of the latter. In our framework, data aggregations are written as regular Java programs in which the data inputs are specified as predicates over a rich ontology. Our middleware matches these data specifications to the appropriate Web service, automatically invokes it, and performs the required data serialization and deserialization. Finally, the data aggregation program is itself automatically deployed as yet another Web service. Thus, our programming framework hides the complexity of Web service development from the end user. We discuss the design and implementation of the framework, based on open standards and state-of-the-art tools.

 

 

QED: Quality of Service Enabled Databases

Stefan Krompass, Daniel Gmach, Andreas Scholz, Stefan Seltzsam, Alfons Kemper (TU München)

In today's enterprise service-oriented software architectures, database systems are a crucial component of quality of service (QoS) management between customer and service provider. The database workload consists of requests stemming from many different service classes, each of which has a dedicated service level agreement (SLA). We present an adaptive QoS management based on an economic model that adaptively penalizes individual requests depending on the SLA and on the current degree of SLA conformance exhibited by the particular service class. For deriving the adaptive penalty of individual requests, our model differentiates between opportunity costs for underachieving an SLA threshold and marginal gains for (re-)achieving an SLA threshold. Based on the penalties, we develop a database component that schedules requests depending on their deadline and their associated penalty. We report experiments on our operational system to demonstrate the effectiveness of the adaptive QoS management.
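Scheduling by deadline and penalty can be sketched with a simple priority queue. This is an illustrative sketch with hypothetical field names and a deliberately naive priority formula, not the QED scheduler: urgency (earlier deadline) and a higher adaptive penalty both raise a request's priority.

```python
import heapq

# Order pending requests by a priority key combining deadline and the
# adaptive penalty of the request's service class (lower key = served first).

def schedule(requests):
    heap = [(req["deadline"] - req["penalty"], i, req)
            for i, req in enumerate(requests)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2]["id"] for _ in range(len(heap))]

reqs = [
    {"id": "gold",   "deadline": 100, "penalty": 90},
    {"id": "silver", "deadline": 100, "penalty": 40},
    {"id": "bronze", "deadline": 300, "penalty": 10},
]
print(schedule(reqs))  # ['gold', 'silver', 'bronze']
```

In the adaptive model, the `penalty` values would be recomputed continuously from each class's current SLA conformance, so the ordering shifts as classes drift toward or away from their thresholds.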

 

 

A Distributed Approach for the Federation of Heterogeneous Registries

Luciano Baresi and Matteo Miraz (Politecnico di Milano)

Registries play a key role in service-oriented systems. Originally, the registry was a neutral player between the provider of a service and its clients; its role was to enable the publication and discovery of available services. The UDDI Business Registry (UBR) was meant to foster these concepts and provide a common, neutral reference for companies interested in Web services. The more Web services were used, the more companies started to create their own ``local'' registries: more efficient discovery processes, better control over the quality of published information, and more sophisticated publication policies motivated the creation of private repositories. The number and heterogeneity of the different registries (besides the decision to close the UBR) motivate the need for new and sophisticated means of making different registries cooperate.

This paper proposes a novel approach based on a publish and subscribe (P/S) infrastructure to federate different heterogeneous registries and make them exchange information about published services. The P/S infrastructure allows clients to abstract the whole federation as if it were a single logical registry, allows clients to subscribe to dedicated services, and supports the dynamic compositions of federations. The paper discusses the main motivations for the P/S-based infrastructure for the integration of heterogeneous registries, proposes an integrated information model, introduces the main components of the framework, and exemplifies them on a simple case study.
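The publish/subscribe backbone of such a federation can be illustrated with a minimal broker. This sketch is ours, not the paper's infrastructure; the topic and advertisement fields are hypothetical. Each registry subscribes to the topics it cares about and receives service advertisements published by any peer, so the federation behaves like one logical registry.

```python
from collections import defaultdict

# Minimal topic-based publish/subscribe broker: registries register
# callbacks per topic and are notified of every matching advertisement.

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, advert):
        for callback in self.subscribers[topic]:
            callback(advert)

broker = Broker()
received = []
broker.subscribe("payment-services", received.append)
broker.publish("payment-services", {"name": "CreditCheck"})
print(received[0]["name"])  # CreditCheck
```

Because publishers and subscribers are decoupled by the broker, registries can join or leave the federation dynamically, which is the property the paper exploits for dynamic composition of federations.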

 

 

I-Queue: Smart Queues for Service Management

Mohamed Mansour (Georgia Tech), Karsten Schwan (Georgia Tech), Sameh Abdelaziz (Worldspan, L.P.)

The proliferation of the Internet has led to a significant increase in the volume of business conducted online. Such business systems are characterized by complex underlying software structures and constantly evolving feature sets. In addition, the data on which such systems operate changes to reflect the current state of the business, sometimes multiple times per day. The dynamic nature of these applications and systems poses substantial challenges to their use and management, suggesting the need for automated solutions. In this paper, we present techniques, and their middleware implementation, for automatically managing request streams directed at a large server application, the goal being to improve application reliability in the face of evolving feature sets and business data. These techniques (1) automatically detect input patterns that lead to performance degradation or failures and then (2) use these detections to trigger application-specific methods that control input patterns to avoid, or at least defer, such undesirable phenomena. Lab experiments using actual traces from industrial partners show a 16% decrease in the frequency of server restarts when using these techniques, at negligible cost in additional overhead and within delays suitable for the rates of change experienced by this application.

 

 

A Service Oriented Architecture Supporting Interoperability for Payments Card Processing Systems

Joseph M. Bugajski (Visa International), Robert L. Grossman (Open Data Group) and Steve Vejcik (Open Data Group)

As the size of an organization grows, so does the tension between a centralized system for the management of data, metadata, derived data, and business intelligence and a distributed one. With a centralized system, it is easier to maintain the consistency, accuracy, and timeliness of data. On the other hand, with a distributed system, different units and divisions can more easily customize systems and more quickly introduce new products and services. By data interoperability, we mean the ability of a distributed organization to work with distributed data consistently, accurately, and in a timely fashion. In this paper, we introduce a service oriented approach to analytics and describe how it is used to measure and monitor data interoperability.

 

 

Dynamic Service Oriented Architectures through Semantic Technology

Suzette Stoutenburg, Leo Obrst, Deborah Nichols, Ken Samuel, Paul Franklin (The MITRE Corporation)

The behavior of Department of Defense (DoD) Command and Control (C2) services is typically embedded in executable code, providing static functionality that is difficult to change. As the complexity and tempo of the world increase, C2 systems must move to a new paradigm that supports the ability to dynamically modify service behavior in complex, changing environments. Separation of service behavior from executable code provides the foundation for dynamic system behavior and agile response to real time events. In this paper we show how semantic rule technology can be applied to express service behavior in data, thus enabling a dynamic service oriented architecture.

 

 

Services-Oriented Computing in a Ubiquitous Computing Platform

Ji Hyun Kim, WonIl Lee, Jonathan Munson, Young Ju Tak (IBM)

Current telematics services platforms are tightly integrated, relatively fixed-function systems that manage the entire end-to-end infrastructure of devices, wireless communications, device management, subscriber management, and other functions. This closed nature prevents the infrastructure from being shared by other applications, which inhibits the development of new ubiquitous computing services that may not in themselves justify the cost of an entire end-to-end infrastructure. Services-oriented computing offers means to better expose the value of such infrastructures. We have developed a services-oriented, specification-based, ubiquitous computing platform called TOPAZ that abstracts common ubiquitous computing functions and makes them accessible to any application provider through Web-service interfaces. The nature of the TOPAZ services, as enabling long-running sessions between applications and remote clients, presents peculiar challenges to the generic issues of service metering, resource management, and application development. In this paper we describe these challenges and discuss the approach we have taken to them in TOPAZ. We first motivate and describe the TOPAZ application model and its service set. We then describe TOPAZ’s resource management and service metering functions, and its three-party session model that forms the basis for them.

 

 

Optimized Web Services Security Performance with Differential Parsing

Masayoshi Teraguchi, Satoshi Makino, Ken Ueno, Hyen-Vui Chung (IBM)

The focus of this paper is to exploit a differential technique, based on the similarities among the byte sequences of processed SOAP messages, to improve the performance of the XML processing in Web Services Security (WS-Security) processing. The WS-Security standard is a comprehensive and complex specification and requires extensive XML processing, which is one of the biggest overheads in WS-Security processing. This paper presents a novel WS-Security processing architecture based on differential parsing. The architecture divides the byte sequence of a SOAP message into parts according to the XML syntax of the message and stores them efficiently in an automaton in order to skip unnecessary XML processing. The architecture also provides a complete WS-Security data model so that we can support practical and complex scenarios. A performance study shows that our new architecture can reduce memory usage and improve the performance of the XML processing in WS-Security processing when asymmetric signature and encryption algorithms are used.

 

 

Optimizing Differential XML Processing by Leveraging Schema and Statistics

Toyotaro Suzumura, Satoshi Makino, and Naohiko Uramoto  (IBM)

XML fills a critical role in many software infrastructures such as SOA (Service-Oriented Architecture), Web Services, and Grid Computing. In this paper, we propose a high-performance XML parser used as a fundamental component to increase the viability of such infrastructures even for mission-critical business applications. We previously proposed an XML parser based on the notion of differential processing, under the hypothesis that XML documents are similar to each other; in this paper we enhance this approach to achieve higher performance by leveraging static as well as dynamic information. We use XML schema languages as the static information, which is used for optimizing the internal state transitions, while statistics over a set of instance documents are used as the dynamic information. These two approaches can be used in complementary ways.

 

 

Web Browsers as Service-Oriented Clients integrated with Web Services

Hisashi Miyashita and Tatsuya Ishihara (IBM)

Web browsers are becoming important application clients in SOAs (Service Oriented Architectures) because more and more Web applications are built from multiple Web Services. Incorporating Web Services into Web browsers is therefore of great interest. However, existing Web Service frameworks bring significant complexity to traditional Web applications based on DHTML, since such frameworks use an RPC (Remote Procedure Call) or message-passing model while DHTML is based on a document-centric model. Web application developers therefore have to bridge the gap between these two models, for example the object/XML impedance mismatch.

In our novel approach, in order to request Web Services, the application programs manipulate documents with uniform document APIs without invoking service-specific APIs and without mapping between objects and XML documents. The Web Service framework automatically updates the document by exchanging SOAP messages with the servers.

We show that in our new framework, WebDrasil, we can request a service with only one XPath expression, and then get the response using DOM (Document Object Model) APIs, an approach which is efficient and easily understood by typical Web developers.

 

 

A user driven policy selection model

Mariagrazia Fugini, Pierluigi Plebani, Filippo Ramoni (Politecnico di Milano)

A policy states the non-functional aspects of the Web service to which it is associated. During the Web service discovery phase, a policy selection step must also occur, in order to retrieve not only the Web services able to perform the required functionalities, but also those able to ensure a given quality level.

Both service providers and service users define policies. Providers list the supported service features using a more technical vocabulary (e.g., bandwidth, latency), whereas users state their requirements using a higher-level vocabulary closer to human users (e.g., speed).

The aim of this paper is twofold. On the one hand, it introduces a model for expressing quality according to both the application and the human user perspectives. Such a model, compliant with the WS-Policy framework, not only mediates between the two perspectives, but is also capable of considering the different importance that the user can assign to each quality dimension. On the other hand, the paper introduces a policy selection model based on the adopted quality model: a human user expresses requirements in a high-level language, and these requirements are matched against a lower-level service quality specification.

 

 

Abstract Transaction Construct: Building a Transaction Framework for Contract-Driven, Service-Oriented Business Process

Ting Wang, Paul Grefen, Jochem Vonk (Eindhoven University of Technology)

Transaction support is vital for the reliability of business processes, which nowadays can involve dynamically composed services across organizational boundaries. However, no single transaction model is comprehensive enough to accommodate the various transactional properties demanded by these processes. We therefore develop the Business Transaction Framework, which is based on Abstract Transactional Constructs (ATCs). ATCs are abstract types of existing transaction models that can be composed and executed in a service-oriented transaction framework according to the ATC algebra. By selecting and composing ATCs on demand, flexible and reliable process execution is guaranteed.

 

 

Securing Web Service Composition: Formalizing Authorization policies using Event Calculus

Rouached Mohsen, Godart Claude (LORIA-INRIA)

Service composition is a fundamental technique for developing Web services based applications. In general, a single service cannot achieve the user’s goal, while several services coming from different providers can be composed dynamically to satisfy this objective. As autonomous services are invoked through protocols, issues such as security must be taken into account. Ensuring security in such a system is challenging and not supported by most of the security frameworks proposed in the current literature. Indeed, the distributed nature of service-oriented interactions (several parties, no centralized middleware, several security models) renders the support of security more complex than in any other context.

This paper presents a formal model for composing security policies dynamically to cope with changes in requirements or occurrences of events. We address one particular issue, that of authorization within a Web services composition. In particular, we propose a dynamic authorization model which allows for complex authorization policies whilst ensuring trust and privacy between the component services.

 

 

Supporting QoS Monitoring in Virtual Organisations

Patrick Stockreisser, Jianhua Shao, W. Alex Gray, Nick J. Fiddian (Cardiff University)

It is widely recognized that some service providers may wish to team up, at some point in time, to form an alliance or a virtual organisation (VO), in order to respond to or exploit a particular market opportunity. The ability to create such VOs on demand in a dynamic, open and competitive environment is one of the challenges that underlie the Grid concept and research. However, to form and manage a VO effectively, issues concerning the monitoring of quality of service (QoS) must be considered. While a range of methods has been proposed for managing various aspects of QoS in service oriented computing environments, existing efforts tend to adopt a provider-centric perspective, aiming largely at optimizing and guaranteeing QoS for service delivery. In this paper, we consider QoS monitoring from a service user's perspective in the context of a VO. We describe an approach to supporting a VO manager in monitoring the QoS offered by its members, where monitoring requirements are expressed as queries in a simple language and processed using a data stream based approach.

 

 

Event Based Service Coordination over Dynamic and Heterogeneous Networks

Gianluigi Ferrari, Roberto Guanciale, Daniele Strollo (Istituto Alti Studi IMT)

This paper describes the design of a programming middleware for coordinating services distributed over dynamic and heterogeneous networks, where service addresses are not always publicly available.

The goal of this paper is to develop a network programming model for such scenarios, addressing interoperability while avoiding centralization. We reach this goal by introducing a mechanism for the dynamic publication of the same service to several intermediate gateways.

In this paper, we illustrate the problems posed by relaxing the public addressing schema in the context of service orchestration. We discuss the design choices of our middleware. Then we discuss the actual network technologies underlying the prototype implementation and the formal foundations that drive our approach.

 

 

Implicit vs. Explicit Data-Flow Requirements in Web Service Composition Goals

Annapaola Marconi (ITC-irst), Marco Pistore (University of Trento), Paolo Traverso (ITC-irst)

While several approaches to the automated composition of Web services have been proposed in the literature, so far very few of them take into account a key aspect of the composition problem: how to specify data-flow requirements, i.e., requirements on the data that are exchanged among component services.

In this paper we present two different ways to specify composition requirements on exchanged data. Implicit data-flow requirements are a set of rules that specify how the functions computed by the component services are to be combined by the composite service; they implicitly define the required constraints among exchanged data. Explicit data-flow requirements are a set of explicit specifications of how the composition should manipulate messages and route them from/to components. We integrate the two specification languages within a state-of-the-art framework for the automated synthesis of compositions, and compare them through an experimental evaluation, both from the point of view of efficiency and scalability and from that of practical usability.

 

 

Light-Weight Semantic Service Annotations through Tagging

Harald Meyer, Mathias Weske (University of Potsdam)

Discovering and composing services is a core functionality of a service-oriented software system. Semantic web services promise to support and (partially) automate these tasks. But creating semantic service specifications is a difficult, time-consuming, and error-prone task, typically performed by service engineers.

In this paper we present a community-based approach to the creation of semantic service specifications. Inspired by concepts from emergent semantics and folksonomies, we introduce semantic service specifications with restricted expressiveness. Instead of capturing service functionality through preconditions and effects, services are tagged with categories. An example illustrates the ease of use of our approach in comparison to existing approaches.

 

 

Service-Oriented Model-Driven Development: Filling the Extra-Functional Property Gap

Guadalupe Ortiz, Juan Hernández (University of Extremadura)

Being one of the most promising technologies today, Web Services lie at the crossroads of distributed computing and loosely coupled systems. Although vendors provide multiple platforms for service implementation, service integrators, developers and providers demand approaches for managing service-oriented applications at all stages of development. In this sense, approaches such as Model-Driven Development (MDD) and Service Component Architecture (SCA) can be used in conjunction for modeling and integrating services independently of the underlying platform technology. In addition, WS-Policy provides an XML-based standard description for extra-functional properties, which remains independent of both the final property implementation and the binding to the service in question. In this paper we propose a cross-disciplinary approach in which MDD, SCA and WS-Policy are combined in order to develop extra-functional properties in web services from a platform-independent model. This model is later transformed into platform-specific models, from which the final code is automatically generated.

 

 

Top Down Versus Bottom up in Service Oriented Integration: an MDA-based Solution for Minimizing the Technology Coupling

Theo Dirk Meijler, Gert Kruithof, Nick van Beest (University of Groningen)

Service Oriented integration typically combines top-down development and bottom-up reverse engineering. Top-down development starts from requirements and ends at implementation; bottom-up reverse engineering starts from available components and data sources. Often, the integrating business processes are directly linked to the reverse-engineered web services, resulting in a strong technology coupling. This in turn leads to low maintainability and low reusability. The Model Driven Architecture (MDA) provides an approach to achieve technology independence through full top-down development. However, that approach does not handle bottom-up reverse-engineered components well. In this paper, an approach is introduced in which top-down and bottom-up realization are combined while minimizing the technology coupling. This is achieved through an explicit buffer between the top-down and bottom-up parts: “high-level” web services are derived through top-down development, “low-level” web services are reverse engineered, and a mapping is created between the two. The approach incorporates reverse engineering of web services while retaining the MDA advantages of platform independence, maintainability and reusability. Using a typical business case from the financial sector, the advantages and disadvantages of the presented approach are demonstrated.

 

 

WSMX: A Semantic Service Oriented Middleware for B2B Integration

Matthew John Moran (Digital Enterprise Research Institute)

In this paper we present how Semantic Web Service technology can be used to overcome process and data heterogeneity in a B2B integration scenario. While one partner uses RosettaNet for its message exchange process and message definitions, the other operates a proprietary solution based on a combination of WSDL and XML Schema. For this scenario we show the benefits of semantic descriptions, which are used within the integration process to enable data and process mediation of services. We illustrate this integration process on WSMX, a middleware platform conforming to the principles of a Semantic Service Oriented Architecture.

 

 

Automated Discovery of Compositions of Services described with Separate Ontologies

Antonio Brogi (University of Pisa), Sara Corfini (University of Pisa), José Aldana (Universidad de Málaga), Ismael Navas (Universidad de Málaga)

The synergy between the emerging areas of Semantic Web and Web services is promoting the development of so-called "semantic Web services". A semantic Web service is a software component which self-describes its functionalities by annotating them with (instances of) concepts formally defined by means of ontologies.

The development of fully-automated, semantics-based service discovery mechanisms constitutes a major challenge in this context, and it raises several important issues. One of them is the ability to cope with different ontologies, as different services are typically described in terms of different ontologies. Another important feature is the capability to discover service compositions rather than single services. Indeed, it is often the case that a client query cannot be fulfilled by a single service, while it may be fulfilled by a suitable composition of services. Last, but not least, efficiency is obviously an important objective of service discovery mechanisms.

In this paper, we present a matchmaking system that exploits ontology-based service descriptions to discover service compositions capable of satisfying a client request. Efficiency is achieved by pre-computing off-line a (hyper)graph that represents the functional dependencies among different (sub)services. The notion of semantic field is employed to cross different ontologies.

 

 

Dynamic Web Service Selection and Composition: An Approach based on Agents Dialogues

Yasmine Charif-Djebbar, Nicolas Sabouret (Laboratoire d'Informatique de Paris 6)

In this paper, we are motivated by the problem of automatically and dynamically selecting and composing services for the satisfaction of user requirements. We propose an approach in which agents perform service composition through unplanned interactions. We present an architecture based on agents that offer semantic web services and that are capable of reasoning about their services' functionalities. These agents are provided with an interaction protocol that allows them, through dialogue games, to select and compose appropriate services' functionalities in order to fulfill a complex set of requirements specified by a user.

Three features distinguish our proposal from other work in the area. First, in our approach web services are pro-active and can thus engage in complex conversations, instead of just simple invocation and response with a result or error. Second, since it is performed through a dialogue model, our composition process does not require predefined plans or a predefined order of invocation of the services' operations. Finally, the use of agent technology allows for the expression of global constraints, making it possible to select services according to specific user requirements.

 

 

Examining Usage Protocols for Service Discovery

Rimon Mikhaiel, Eleni Stroulia (University of Alberta)

To date, web-service discovery has followed the traditional component discovery methodology and has examined signature matching, specification matching and information retrieval methods, based on the WSDL specification and documentation. WSDL specifications, however, can be information-poor, with standard data types, unintuitive identifiers and no documentation. The usage of the WSDL elements in the context of a BPEL composition can be an extremely useful source of information for service discovery. In this paper, we discuss our method for service discovery using interface and usage matching, i.e., WSDL-and-BPEL matching. Our approach views both WSDL and BPEL as hierarchical structures and uses tree alignment to compare them. We illustrate our method with two example scenarios.

 

 

Leveraging Web Services Discovery with Customizable Hybrid Matching

Vincenzo D'Andrea (DIT-University of Trento), Willem-Jan van den Heuvel (Tilburg University) and Natallia Kokash (DIT-University of Trento)

With the growing popularity of service-oriented computing, the problem of web service discovery becomes increasingly important. Many matching algorithms for comparing user requests to available service interfaces have been proposed, but there are still no consistent experimental results or comparative evaluations of the methods. This paper contributes to web service discovery in several ways. First, we explain state-of-the-art approaches to uniform matching using syntactic, semantic and structural information from service interface descriptions. Second, the paper assesses the efficacy of the tf-idf heuristic and of two WordNet-based semantic similarity metrics on a uniform web service collection. Finally, it introduces scenarios illustrating hybrid matching that can help to overcome the drawbacks of the uniform approaches.

 

 

Semantic Service Mediation

Liangzhao Zeng (IBM) , Boualem Benatallah (University of New SouthWales), Hui Lei (IBM)

Service mediation, which decouples service interactions, is a key component in supporting the implementation of SOA solutions across enterprises. The decoupling is achieved by having consumers and providers interact via an intermediary. The earliest service mediations were keyword- and value-based, requiring both service providers and consumers to adhere to the same data formats in defining service interfaces and requests. As such, these service mediations are inadequate for supporting interaction among services in heterogeneous and dynamic environments. To overcome this limitation, semantics are introduced into service mediation for more flexible service matching. Unlike existing semantic service mediations, our system uses ontologies not only for one-to-one service matching, but also for one-to-multiple service matching and for composing services by correlation as part of the mediation. Therefore, by engineering ontologies, our system allows different services to interact using interfaces in their native formats. Further, by performing service correlation systematically, services can be composed automatically, without any programming effort (neither composition rules nor process models). We argue that a service mediator like ours enables flexible and on-demand composition among services.

 

 

Sliver: A BPEL Workflow Process Execution Engine for Mobile Devices

Gregory Hackmann, Mart Haitjema, Christopher Gill, Gruia-Catalin Roman (Washington University in St. Louis)

The Business Process Execution Language (BPEL) has become the dominant means for expressing traditional business processes as workflows. The widespread deployment of mobile devices like PDAs and mobile phones has created a vast computational and communication resource for these workflows to exploit. However, BPEL so far has been deployed only on relatively heavyweight server platforms such as Apache Tomcat, leaving the potential created by these lower-end devices untapped. This paper presents Sliver, a BPEL workflow process execution engine that supports a wide variety of devices ranging from mobile phones to desktop PCs. We discuss the design decisions that allow Sliver to operate within the limited resources of a mobile phone or PDA. We also evaluate the performance of a prototype implementation of Sliver.

 

 

Assembly of Business Systems Using Service Component Architecture

Anish Karmarkar (Oracle), Mike Edwards (IBM)

Service Component Architecture (SCA) is a set of specifications which provide a programming model for the creation and assembly of business systems using a service oriented architecture. SCA uses service components as the building blocks of business systems. SCA supports service components written using a very wide range of technologies, including programming languages such as Java, BPEL and C++, as well as declarative languages such as XSLT. SCA also provides a composition model for the assembly of distributed groups of service components into a business solution, with composites used to group collections of components and wires modeling the connections between components.

SCA aims to remove “middleware” concerns from the programming code, by applying infrastructure concerns declaratively to compositions, including aspects such as Security and Transactions.

SCA is being evolved by an industry collaboration, with the aim of eventual submission to a standards body.

 

 

The End of Business as Usual: Service-Oriented Business Transformation

Andy Mulholland (CapGemini)

Abstract pending

 

 

A priori conformance verification for guaranteeing interoperability in open environments

Matteo Baldoni, Cristina Baroglio, Alberto Martelli, and Viviana Patti (Universita' di Torino)

An important issue in an open environment like the web is the capability of guaranteeing the interoperability of a set of services. When the interaction scheme that the services should follow is given a priori (e.g., as a choreography or as an interaction protocol), it becomes possible to verify, before the interaction takes place, whether the interactive behavior of a service (e.g., a BPEL process specification) respects it. This verification is known as a "conformance test". Recently some attempts have been made to define conformance tests w.r.t. a protocol, but these approaches fail to capture the very nature of interoperability, turning out to be too restrictive. In this work we give a simple definition of interaction protocol based on message exchange and on finite state automata, and we focus on capturing those properties that are essential to verifying the interoperability of a set of services. In particular, our aim is to define a conformance test that can guarantee a priori the interoperability of a set of peers by verifying properties of each single peer against the choreography. This capability is particularly relevant in open environments, like the web, where services are identified and composed on demand and dynamically, and the system as a whole cannot be analysed.

 

 

Interaction Soundness for Service Orchestrations

Frank Puhlmann, Mathias Weske (Hasso-Plattner-Institute)

Service oriented architectures (SOA) comprise service orchestrations and choreographies. While orchestrations are strongly related to workflows as a formal foundation, choreographies do not yet have such a foundation. A formal foundation allows for analyzing properties like soundness for orchestrations and conformance between concrete orchestrations and choreographies, as well as for providing an unambiguous description of a SOA. This paper introduces lazy soundness for choreographies, a new kind of soundness well suited for SOA, since it supports so-called left-behind or lazy activities, which are required for extending soundness from orchestrations to choreographies.

 

 

Modeling Web Services by Iterative Reformulation of Functional and Non-Functional Requirements

Jyotishman Pathak, Samik Basu, Vasant Honavar (Iowa State University)

We propose a new framework for modeling Web services based on the techniques of abstraction, composition and refinement. The approach allows users to specify an abstract and possibly incomplete specification of the desired (goal) functionality. This specification is used to select a set of suitable component services such that their composition is equivalent to the goal service, thereby mimicking the requested functionality. However, in the event such a composition is not realizable, our approach determines the cause of the failure, which can be used to refine the goal specification. The above steps are repeated with the refined goal iteratively until a feasible composition is identified or the user decides to abort.

 

 

SOCK: a calculus for service oriented computing

Claudio Guidi, Roberto Lucchi, Nadia Busi, Gianluigi Zavattaro, Roberto Gorrieri (University of Bologna)

Service oriented computing is an emerging paradigm for designing distributed applications, whose main concepts are service and composition. In this context, formal methods can contribute by enabling the verification of properties and by providing basic languages for supporting system design. In this paper we propose SOCK, a three-layered calculus equipped with a formal semantics, addressing all the basic mechanisms of service communication and composition. The main contribution of our work is the development of a formal framework in which service design is decomposed into three fundamental parts: the behaviour, the declaration and the composition, where each part can be designed independently of the others.

 

 

A Model-Driven Development Approach to Creating Service-Oriented Solutions

Simon Johnston, Alan W Brown (IBM)

One of the most pressing problems organizations face as they adopt a service-based infrastructure involves helping them to effectively transition to service-oriented design of applications. Great benefit could be gained by using a well-defined, repeatable approach to the modeling of business domains from a services perspective that supports the application of automated approaches to realize a service-based solution.

In this paper we explore model-driven approaches to the realization of service-oriented solutions. We describe a services-oriented design approach that utilizes a UML profile for software services as the design notation for expressing the design of a services-oriented solution. We describe how a services model expressed in this UML profile can be transformed into a specific service implementation, and describe the design-to-implementation mapping. We then comment on how these technology elements fit into an overall MDD approach for SOA.

 

 

SCA Policy Association Framework

Michael Beisiegel (IBM), Chris Sharp (IBM), Ashok Malhotra (Oracle), Greg Pavlik (Oracle), Nickolas Kavantzas (Oracle)

SCA (Service Component Architecture) is a collaborative effort by the leading vendors in the enterprise software space to define an architecture for Service Oriented computing [1]. The goal is to allow reusable services written in different languages and using different access methods and container technologies to be combined into composite applications. The collaboration started in 2005 and is ongoing.

This paper describes the SCA policy association framework that allows policies and policy subjects specified using WS-Policy [2] and WS-PolicyAttachment [3] to be associated with composite applications and processes. SCA supports two models for associating policies with components. In the first, the SCA developer provides relatively abstract “policy hints”. These hints are bound to concrete policies at deployment time. The second model allows the SCA developer to refer directly to externally managed policies. This is important in scenarios where policies are stored in a repository and actively managed, possibly by third-parties. In both cases, policy association is very simple: a single statement refers either to a cross-domain Policy Set or to a collection of policy hints contained in a Policy Profile.

 

 

Towards Adaptive Management of QoS-aware Service Compositions - Functional Architecture

Mariusz Momotko (Rodan Systems), Michał Gajewski (Rodan Systems), André Ludwig (University of Leipzig), Ryszard Kowalczyk (Swinburne University of Technology), Marek Kowalkiewicz (Poznan University of Economics), Jian Ying Zhang (Swinburne University of Technology)

Service compositions enable users to realize their complex needs as a single request. Despite intensive research, especially in the areas of business processes, web services and grids, it is still an open question how to manage service compositions so as to satisfy both functional and non-functional requirements and to adapt to dynamic changes. In this paper we propose a (functional) architecture for adaptive management of QoS-aware service compositions, developed in the EU-funded Adaptive Service Grid (ASG) project. This architecture may support various execution strategies (described in another paper submitted to ICSOC 2006, research track) based on dynamic selection and negotiation of services included in a service composition, contracting based on service level agreements, service enactment with flexible support for exception handling, monitoring of service level objectives, and profiling of execution data. The architecture is defined in terms of software components, their public interfaces and the interactions among them, according to the basic tasks common to all execution strategies. A first prototypical implementation of this architecture has been developed within the ASG project.

 

 

Extending Web Services Transactions with Common Business Primitives

Mike P. Papazoglou, Benedikt Kratz (Tilburg University)

Advanced business applications typically involve well-defined business functions such as payment processing, shipping and tracking, coordinating and managing marketing strategies, determining new product offerings, granting/extending credit, managing market risk and so on. These reflect common, standard business functions that apply to a variety of application scenarios. Although such business functions drive transactional applications between trading partners, they are completely external to current Web services transaction mechanisms and are expressed only as part of application logic.

To remedy this situation, this paper proposes a new Web services transaction model and support mechanisms. The model allows expressing business and QoS aware transactions on the basis of business functions such as payment and credit conditions, delivery conditions, business agreements stipulated in SLAs, liabilities and dispute resolution policies, and blends these business functions with QoS criteria.

 

 

Licensing Services: Formal Analysis and Implementation

Vincenzo D'Andrea, G.R. Gangadharan (University of Trento)

The distribution of services spanning organizational boundaries raises problems related to intellectual property that are little explored in service oriented research. As a way to manage the rights between service consumers and service providers, licenses are a critical consideration for services. Since the nature of services differs significantly from that of traditional software and components, software and component licenses cannot be adopted directly for services. For drafting a family of machine-readable licenses, the clauses of a service license should be unambiguous. We propose a formalisation of licensing clauses specific to services for the unambiguous definition of a license. We extend the Open Digital Rights Language (ODRL) to implement the clauses of service licensing, making a service license compatible with all existing service standards.

 

 

QoS Assessment of Providers with Complex Behaviours: An Expectation-Based Approach with Confidence

Gareth Shercliff, Jianhua Shao, W. Alex Gray, Nick J. Fiddian (Cardiff University)

Service Level Agreements (SLAs) define a set of consumer expectations which must be met by a provider if a contract is not to be broken. Since providers will potentially be providing many different services to thousands of different consumers at any given time, they must adopt an efficient policy for resource management which classifies consumers into service ranges. Existing approaches to QoS assessment of providers assume that the policy of a provider with respect to consumers is handled on an individual basis. We maintain that such approaches are ineffective when providers adopt a policy based on service differentiation, and in response we introduce and evaluate an expectation-based approach to QoS assessment which presupposes the classification of consumers into ranges defined by their expectation. As well as carrying out assessment to determine the likely future behaviour of a provider for a given consumer expectation, we attach a confidence value to our assessment to indicate the level of certainty that the result is accurate. Our results suggest that our confidence-based approach can help consumers make better informed decisions in order to find the providers that best meet their needs.
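One way to read "assessment plus confidence" is as a success rate over past interactions in the relevant expectation range, with a confidence value that grows with the amount of evidence. The sketch below is a minimal illustration under that reading (the function name, the use of a Wilson score interval, and the evidence model are all assumptions, not the paper's actual algorithm):

```python
import math

def assess(history, expectation, z=1.96):
    """Sketch of expectation-based QoS assessment with confidence.

    history: list of (promised_level, delivered_level) observations.
    expectation: the QoS level the consumer requires.
    Returns (estimated success rate, confidence in that estimate).
    """
    # Only observations in the same expectation range are comparable,
    # since a differentiating provider treats consumer classes differently.
    relevant = [(p, d) for p, d in history if p >= expectation]
    if not relevant:
        return 0.0, 0.0  # no evidence, no confidence

    n = len(relevant)
    successes = sum(1 for _, d in relevant if d >= expectation)
    rate = successes / n

    # Confidence = 1 minus the half-width of a Wilson score interval,
    # so more observations yield higher certainty in the estimate.
    half_width = (z * math.sqrt(rate * (1 - rate) / n + z**2 / (4 * n**2))
                  / (1 + z**2 / n))
    return rate, 1 - half_width
```

With three observations at the 0.9 level, two of which met the expectation, the estimate is 2/3 with a modest confidence; more history narrows the interval and raises the confidence.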

 

 

A QoS-aware Selection Model for Semantic Web Services

Xia Wang, Tomas Vitvar, Mick Kerrigan, Ioan Toma (DERI)

Automating Service Oriented Architectures by augmenting them with semantics will form the basis of the next generation of computing. Service selection remains an important challenge: once a set of services fulfilling the user's capability requirements has been discovered, deciding which of these services will eventually be invoked by the user is critical, and generally depends on a combined evaluation of their qualities of service (QoS). This paper proposes a QoS-based selection of services. Initially we specify a QoS ontology and its vocabulary using the Web Service Modeling Ontology (WSMO) for annotating service descriptions with QoS data. We continue by defining quality attributes and their respective measurements, along with a QoS selection model. Finally, we present a fair and dynamic selection mechanism, using an optimum normalization algorithm.
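A combined QoS evaluation of the kind described is commonly built from per-attribute normalization plus a weighted sum. The sketch below uses plain min-max normalization as a stand-in (the paper's "optimum normalization algorithm" may differ; the attribute names and function signature are illustrative):

```python
def select_service(candidates, weights, cost_attrs=frozenset({"price", "latency"})):
    """Pick the candidate with the best weighted, normalized QoS score.

    candidates: dict of service name -> dict of QoS attribute -> raw value.
    weights: dict of QoS attribute -> relative importance.
    cost_attrs: attributes where lower raw values are better.
    """
    attrs = list(weights)
    # Min-max normalize each attribute across all candidates into [0, 1].
    lo = {a: min(c[a] for c in candidates.values()) for a in attrs}
    hi = {a: max(c[a] for c in candidates.values()) for a in attrs}

    def norm(a, v):
        if hi[a] == lo[a]:
            return 1.0  # all candidates equal on this attribute
        x = (v - lo[a]) / (hi[a] - lo[a])
        return 1.0 - x if a in cost_attrs else x  # invert cost-type attributes

    scores = {name: sum(weights[a] * norm(a, c[a]) for a in attrs)
              for name, c in candidates.items()}
    return max(scores, key=scores.get), scores
```

For example, a cheaper service and a more available one can be traded off simply by shifting the weights, which is what makes the selection "fair and dynamic" with respect to user preferences.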

 

 

BPEL-Unit: JUnit for BPEL processes

Zhong jie Li, Wei Sun (IBM)

Thanks to unit test frameworks such as JUnit, unit testing has become a common practice in object-oriented software development. However, its application in business process programming is far from prevalent. Business process unit testing treats an individual process as the unit under test and tests its internal logic thoroughly by isolating it from the partner processes. This type of testing cannot be done with current web service testing technologies, which are black-box based. This paper proposes an approach to unit testing of the Business Process Execution Language for Web Services (BPEL4WS) and introduces a tool prototype named BPEL-Unit, which extends JUnit. The key idea of the approach is to transform process interaction via web service invocations into class collaboration via method calls, and then apply object-oriented test frameworks. BPEL-Unit provides the following advantages: it allows developers to simulate partner processes easily, simplifies test case writing, speeds up test case execution, and enables automatic regression testing. With BPEL-Unit, BPEL process unit testing can be performed in a standardized, unified and efficient way.
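The key transformation can be sketched in xUnit style (Python rather than Java here, and all class and method names are hypothetical): the partner process becomes a stub class, the web service invocation becomes an ordinary method call, and the process's internal logic is asserted in isolation.

```python
class ShippingPartnerStub:
    """Simulates a partner process: the process's <invoke> on the
    shipping service becomes a method call returning canned data."""
    def quote(self, weight_kg):
        return 4.0 + 0.5 * weight_kg  # canned response, no real service

class OrderProcess:
    """Stands in for the BPEL process under test; its partner link is
    injected so a test can isolate the internal logic."""
    def __init__(self, shipping):
        self.shipping = shipping

    def total(self, item_price, weight_kg):
        # Internal process logic: add shipping unless the order
        # qualifies for free shipping.
        ship = self.shipping.quote(weight_kg)
        return item_price if item_price >= 100 else item_price + ship

# xUnit-style test case: fast, repeatable, and independent of any
# deployed partner service, hence suitable for regression testing.
def test_shipping_added_below_threshold():
    process = OrderProcess(ShippingPartnerStub())
    assert process.total(50, 2) == 55.0

def test_free_shipping_at_threshold():
    process = OrderProcess(ShippingPartnerStub())
    assert process.total(100, 2) == 100
```

Because the partner is just a class, test cases are quick to write and run, which is the advantage BPEL-Unit claims over black-box web service testing.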

 

 

UML-based Service Discovery Framework

Andrea Zisman, George Spanoudakis (City University)

The development of service centric systems in which software systems are constructed as compositions of autonomous services has been recognised as an important approach for software system development. Recently, we have been experiencing a proliferation of systems being developed, deployed, and consumed in this way. An important aspect of service centric systems is the identification of web services that can be combined to fulfill the functionality and quality criteria of the system being developed. In this paper we present the results of the evaluation of a UML-based framework for service discovery. Our UML-based framework supports the identification of services that can provide the functionality and satisfy properties and constraints of service centric systems specified during the design phase of the development life-cycle. Our approach adopts an iterative design process and allows for the (re-)formulation of the design models of service centric systems based on the discovered services. A prototype tool has been developed and includes (a) a UML 2.0 integration module, which derives queries from behavioural and structural UML design models and integrates the results of the queries; and (b) a query execution engine, which performs queries against service registries based on similarity analysis.

 

 

NESSI: Transforming the Internet to Service your Life

Stefano De Panfilis, (Engineering Ingegneria Informatica, Coordinator - NESSI SRA Committee) and Frederic Gittler (HP Laboratories, Vice-Chair - NESSI Steering Committee)

 

This session introduces the context, relevance and main objectives of NESSI, the European Software and Services Technology Platform.  The talk then focuses on the Strategic Research Agenda, presenting the structure, approach, and priorities to deliver the technology and systemic foundations for the Knowledge Economy through the European Framework Programme 7 and other initiatives.  The presentation also describes the NESSI open participation model, based on a structure of working groups, providing an answer to the question "How can I join?".