JOURNAL OF DIGITAL INFORMATION MANAGEMENT

(ISSN 0972-7272) The peer-reviewed journal


Volume 4 Issue 1 March 2006

Abstracts

HyperMap: A System to Map the Web

Célia Ghedini Ralha, José Carlos Loureiro Ralha
Computer Science Department
University of Brasilia
Campus Universitário Darcy Ribeiro
Caixa Postal 4466 - CEP 70.919-970 - Brazil
Email: {ghedini,ralha}@cic.unb.br


Abstract

This paper presents the HyperMap system, which dynamically structures information on the Web. The system is based on a framework resulting from multi-disciplinary research, which is briefly discussed. The paper focuses on work done to develop a new generation of computational tools that support the extraction and discovery of useful information on the Web, tools that can become agents of innovation and development.


The Designing of Web Services to Deliver Web Documents Associated with Historical Links

David Chao
Department of Information Systems
College of Business
San Francisco State University
E-mail: dchao@sfsu.edu   

Sam Gill
Department of Information Systems
College of Business
San Francisco State University
E-mail: sgill@sfsu.edu   

Abstract

The historical links of a web site include URLs invalidated by web site reorganization, document removal, renaming, or relocation, as well as links to document snapshots, defined as a document's contents as of a specific point in time. Tracking historical links allows users to follow out-of-date URLs and to retrieve removed documents and document snapshots. This paper presents a logging and archiving scheme to track a source document's history of changes, and designs web services that deliver the source document associated with a historical link.
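The logging-and-archiving idea can be sketched as a small registry that maps historical links either to a document's current URL or to an archived snapshot. This is an illustrative sketch only; the class and method names are invented here and are not the authors' actual design.

```python
class HistoryService:
    """Toy registry for historical links (all names illustrative)."""

    def __init__(self):
        self.redirects = {}   # old URL -> current URL after rename/move
        self.snapshots = {}   # (URL, date) -> document contents at that time

    def log_rename(self, old_url, new_url):
        # Log a reorganization, rename, or relocation so the old URL stays usable.
        self.redirects[old_url] = new_url

    def archive(self, url, date, content):
        # Archive the document's contents as of a specific point in time.
        self.snapshots[(url, date)] = content

    def resolve(self, url):
        # Follow the logged redirect chain to the document's current location.
        while url in self.redirects:
            url = self.redirects[url]
        return url


h = HistoryService()
h.log_rename("/docs/old.html", "/docs/new.html")
h.archive("/docs/new.html", "2006-01-01", "January revision")
current = h.resolve("/docs/old.html")                      # old URL still works
snapshot = h.snapshots[("/docs/new.html", "2006-01-01")]   # snapshot retrieval
```

A web service front end would expose `resolve` and the snapshot lookup as the delivery operations described in the abstract.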


Creating Dependable Web Services Using User-transparent Replica

Markus Hillenbrand, Joachim Götze, and Paul Müller
University of Kaiserslautern
Department of Computer Science
67663 Kaiserslautern, Germany
http://www.icsy.de
Email: {hillenbr,j_goetze,pmueller}@informatik.uni-kl.de   

Abstract

Dependability is a major concern in software development, deployment, and operation. A commonly accepted solution for providing fault-tolerant services on the Internet is to create replicas of the services and deploy them to several hosts. Whenever a service or its underlying node or network fails, another replica is ready to take over. In the Venice project, several techniques are combined into a dependable framework for deploying and managing distributed services using replicas on several distinct network nodes.
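The take-over behaviour can be illustrated with a minimal client-side failover loop: try each replica endpoint in turn, and let the next one serve the request when a node or network failure occurs. This is a generic sketch, not the Venice framework's API; the callables below stand in for remote service endpoints.

```python
def invoke_with_failover(replicas, request):
    """Try each replica in order; the first healthy one serves the request.

    `replicas` is a list of callables standing in for service endpoints.
    """
    last_error = None
    for replica in replicas:
        try:
            return replica(request)
        except ConnectionError as err:   # node or network failure
            last_error = err
    raise RuntimeError("all replicas failed") from last_error


def failing_replica(request):
    raise ConnectionError("host down")


def healthy_replica(request):
    return f"handled: {request}"


# The failure of the first replica is transparent to the caller.
result = invoke_with_failover([failing_replica, healthy_replica], "getQuote")
```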


Automatic and Flexible Web Services Discovery in Semantic Web*


Liuming Lu     Jiaxun Chen     Guojin Zhu
Intermedia,
School of Computer Science, Donghua University, Shanghai 200051, P.R. China
Email: lmlu@mail.dhu.edu.cn     jxchen@dhu.edu.cn     gjzhu@dhu.edu.cn       
 


Abstract

The main objective of the Semantic Web is to make the interoperation among web services, or between users and web services, more automated and flexible. A basic step toward this interoperation is enabling users or web services to discover the web services they require. In this paper we present a practical application of Semantic Web and Web Services concepts, demonstrating a flexible and automatic matching procedure in e-learning. We illustrate how the requirements, web services, and domain knowledge are described in machine-understandable form to support the automatic and flexible discovery of web services. A matchmaking algorithm based on this semantic information is then proposed. Finally, the design and implementation of a prototype for the automatic discovery of web services is described.
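Semantic matchmakers commonly rank candidates by how the advertised concept relates to the requested one in the domain ontology (exact, plug-in, subsumes, fail). The sketch below illustrates that general idea with a toy concept hierarchy; the concept names and the function are invented for illustration and are not the paper's actual algorithm.

```python
# Toy concept hierarchy: child concept -> parent concept (illustrative).
ONTOLOGY = {"AlgebraCourse": "MathCourse", "MathCourse": "Course"}


def ancestors(concept):
    # Walk up the hierarchy collecting all more-general concepts.
    seen = []
    while concept in ONTOLOGY:
        concept = ONTOLOGY[concept]
        seen.append(concept)
    return seen


def degree_of_match(requested, advertised):
    """Rank a candidate the way semantic matchmakers commonly do:
    exact > plug-in (advertised is more general) > subsumes > fail."""
    if requested == advertised:
        return "exact"
    if advertised in ancestors(requested):
        return "plug-in"
    if requested in ancestors(advertised):
        return "subsumes"
    return "fail"


m1 = degree_of_match("AlgebraCourse", "MathCourse")   # more general offer
m2 = degree_of_match("Course", "AlgebraCourse")       # more specific offer
```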
 


A Dynamic Priority Allocation Scheme of Messages for Differentiated Web Services Satisfying Service Level Agreement


Dongjoon Kim,     Sangkyu Lee,     Ajith Abraham     and     Sangyong Han
Department of Computer Science and Engineering, Chung-Ang University
221 Heuk-Seouk Dong, Dongjak Gu, Seoul, Korea
djkim@tysystems.com,     sklee@archi.cse.cau.ac.kr,     ajith.abraham@ieee.org,     hansy@cau.ac.kr   


Abstract

Recently, many enterprises have been adopting web services as the standard for communication between heterogeneous software in XML message-based distributed environments, to carry out business from B2C to B2B. For web services to be applied effectively, differentiated service quality must be guaranteed. However, the majority of current web services do not differentiate the quality of messages, and current web servers do not reflect the quality factors of the service level agreement settled between service provider and user.
Our research analyzes the quality factors appropriate to each level at which differentiated service is provided, and suggests a method for assigning priorities to web service message processing based on these quality factors. The suggested method assigns priorities dynamically in order to satisfy the service level agreement as far as possible.
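One way to picture dynamic priority assignment is a priority queue in which each message's priority combines its SLA class with a boost applied when the observed response time drifts past the agreed target. The classes, numbers, and boost rule below are illustrative assumptions, not the paper's actual scheme.

```python
import heapq


def priority(sla_class, target_ms, observed_ms):
    """Base priority from the SLA class, adjusted dynamically when the
    observed response time exceeds the agreed target (illustrative).
    Lower values are served first."""
    base = {"gold": 0, "silver": 10, "bronze": 20}[sla_class]
    boost = -5 if observed_ms > target_ms else 0
    return base + boost


queue = []
heapq.heappush(queue, (priority("silver", 200, 350), "msg-A"))  # silver, lagging
heapq.heappush(queue, (priority("gold", 100, 80), "msg-B"))     # gold, on target
heapq.heappush(queue, (priority("bronze", 500, 100), "msg-C"))

first = heapq.heappop(queue)[1]   # the gold-class message is processed first
```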
 


Hybrid Storage Scheme for RDF Data Management in Semantic Web


Sung Wan Kim
Department of Computer Information, Sahmyook College
Chungryang P.O. Box 118, Seoul 139-742, Korea
Email: swkim@syu.ac.kr   


Abstract

With the advent of the Semantic Web as the next generation of Web technology, large volumes of Semantic Web data described in RDF will appear in the near future. Most previous approaches treat RDF data as triples and store them in one large relational table. Since processing a query then always requires scanning the whole table, retrieval performance may degrade; in addition, this approach does not scale well. In this paper, we propose a hybrid storage approach for RDF data management that aims to provide good query performance, scalability, manageability, and flexibility. To achieve these goals, we distinguish properties that appear frequently in the RDF data. The RDF data with a distinguished property are treated independently and stored together in a corresponding property-based table. To process a query on a specific property, we can then avoid scanning the whole data set and only have to access the corresponding table. For queries on specific properties, the proposed scheme achieves better performance than the previous approach.
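The routing decision at the heart of the hybrid scheme can be sketched as follows: triples whose property is distinguished go to a small property-based table, everything else to the generic triple table, so a query on a distinguished property touches only its own table. The property names and table layout below are illustrative assumptions.

```python
# Distinguished, frequently appearing properties (illustrative choices).
FREQUENT = {"rdf:type", "dc:creator"}

property_tables = {p: [] for p in FREQUENT}   # one small table per property
triple_table = []                             # catch-all table of (s, p, o)


def store(subject, prop, obj):
    # Triples with a distinguished property go to their own table;
    # everything else lands in the generic triple table.
    if prop in FREQUENT:
        property_tables[prop].append((subject, obj))
    else:
        triple_table.append((subject, prop, obj))


def query_by_property(prop):
    # A query on a distinguished property scans only its small table,
    # never the whole triple store.
    if prop in FREQUENT:
        return property_tables[prop]
    return [(s, o) for s, p, o in triple_table if p == prop]


store("doc1", "rdf:type", "Article")
store("doc1", "dc:title", "Hybrid Storage")
rows = query_by_property("rdf:type")   # answered from the rdf:type table alone
```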


Applying AOP Concepts to Increase Web Services Flexibility


Mehdi Ben Hmida1, Ricardo Ferraz Tomaz2, Valérie Monfort1,2
1 Université Paris IX Dauphine LAMSADE
Place du Maréchal de Lattre Tassigny, Paris Cedex 16
Email: mehdi.benhmida@etud.dauphine.fr   

2 Université Paris 1 - Panthéon - Sorbonne
Centre de Recherche en Informatique, 90 rue de Tolbiac 75634 Paris cedex 13
Email: {Ricardo.Ferraz-Tomaz,valerie.Monfort}@malix.univ-paris1.fr


Abstract

Web Services are the fitted technical solution providing the loose coupling required to achieve a Service Oriented Architecture (SOA). In previous work, we proposed an approach using the Aspect Oriented Programming (AOP) paradigm to increase the adaptability of Web Services. That approach suffers from some deficiencies, such as dependency on both the programming language (Java) and the SOAP engine (AXIS). In this paper, we propose to increase the adaptability of Web Services by using the main agreed AOP semantics (advices, pointcuts, and joinpoints) to change the original Web Service behavior. In the new approach, we consider advices to be Web Services themselves. Moreover, we propose an XML language to describe pointcuts and joinpoints and to reference advices. The invocation of advices (Web Services) is accomplished by an XQuery engine to ensure SOAP engine independence, and advices are implemented as Web Services to promote programming language independence.
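The core AOP mechanism being described (advices woven around an operation at declared pointcuts) can be sketched without any SOAP machinery. In the sketch below the registered callables stand in for remote advice Web Services and the dictionary stands in for the XML pointcut description; all names are illustrative, not the paper's actual language or engine.

```python
# Pointcuts map operation names to advice lists; each advice stands in
# for a remote advice Web Service (names illustrative).
pointcuts = {"getPrice": {"before": [], "after": []}}


def advise(operation, kind, advice):
    # Register an advice at a pointcut ("before" or "after" the operation).
    pointcuts[operation][kind].append(advice)


def invoke(operation, func, *args):
    """Weave the registered advices around the original service behaviour."""
    for before in pointcuts.get(operation, {}).get("before", []):
        args = before(*args)                  # before-advice may rewrite args
    result = func(*args)
    for after in pointcuts.get(operation, {}).get("after", []):
        result = after(result)                # after-advice may rewrite result
    return result


def get_price(item):
    # Original Web Service behaviour, unchanged by the weaving mechanism.
    return {"book": 10.0}[item]


# An after-advice adds 20% tax without touching the original service.
advise("getPrice", "after", lambda price: round(price * 1.2, 2))

price = invoke("getPrice", get_price, "book")
```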


A Model for the Aggregation of QoS in WS Compositions
Involving Redundant Services
 


Michael C. Jaeger and Hendrik Ladner
Berlin University of Technology
Institute of Telecommunication Systems
Formal Models, Logics and Programming (FLP),
Sek. FR6-10, Franklinstrasse 28/29
D-10587 Berlin, Germany
Email: mcj@cs.tu-berlin.de and hendrik.ladner@gmx.de       
 


Abstract

A composition arranges available services resulting in a defined flow of tasks. A discovery process identifies the suitable candidate services for a composition. A subsequent selection process chooses the optimal candidate for each task. The selection process can consider different quality-of-service (QoS) categories of the individual services to optimise the quality of the whole composition. The result is a quality-optimised assignment of candidates to each task.

In this work, we discuss how candidate services that a selection process originally separated out can still improve a composition with respect to particular QoS categories. To realise this improvement, redundant arrangements involve the alternative candidates to supplement the originally assigned service. We form these redundant arrangements on the basis of our previously introduced composition patterns. The contribution of this work is a computational model that allows the aggregation of the QoS of the composition when these arrangements are applied. This work presents extensions and refinements to our previous work [jaegeretal05], where the basic concepts of this computational model were discussed.
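For one task executed by redundant candidates in parallel, a common aggregation model is: the arrangement fails only if every candidate fails, every candidate is paid for, and the first response determines the time. The sketch below computes those aggregates under that assumption; it is a generic model, not necessarily the authors' exact formulas.

```python
from functools import reduce


def aggregate_redundant(candidates):
    """Aggregate QoS for a redundant parallel arrangement (common model):
    availability = 1 - prod(1 - a_i), cost = sum of costs,
    time = fastest candidate's response time."""
    availability = 1.0 - reduce(
        lambda acc, c: acc * (1.0 - c["avail"]), candidates, 1.0)
    cost = sum(c["cost"] for c in candidates)
    time = min(c["time"] for c in candidates)
    return {"avail": round(availability, 4), "cost": cost, "time": time}


qos = aggregate_redundant([
    {"avail": 0.95, "cost": 2.0, "time": 120},   # originally assigned service
    {"avail": 0.90, "cost": 1.0, "time": 200},   # separated-out alternative
])
# Availability rises to 0.995 at the price of paying for both candidates.
```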
 


 

RDF-based Peer-to-Peer Based Ontology Editing


Peter Becker, Peter Eklund and Natalyia Roberts
School of Economics and Information Systems
The University of Wollongong
Northfields Avenue, NSW 2522
Australia
Email: peklund@uow.edu.au   


Abstract

The evolution of the Semantic Web has accelerated the need for ontologies: new ontologies must be built, and existing ontologies extended and merged. To achieve this we need software tools to edit and build ontologies. This paper describes software implementing a protocol for collaborative ontology editing based on RDF and a Peer-to-Peer (P2P) networking architecture. The protocol allows for the implementation of a voting mechanism embedded in the RDF data itself, using a mixed-initiative design for notification. It is implemented as extensions to an ontology browser called ONTORAMA. The P2P approach is compared to classic ontology editing approaches, and the special requirements of the ontology editing environment are discussed. The protocol, design, implementation, and architecture for ontology update are also elaborated.
 


 

TWSO – Transactional Web Service Orchestrations


Peter Hrastnik
ec3 – Electronic Commerce Competence Center
Donau-City-Straße 1, A-1220 Vienna, Austria
Email: peter.hrastnik@ec3.at

Werner Winiwarter
Department of Scientific Computing
University of Vienna
Universitätsstraße 5, A-1010 Vienna, Austria
Email: werner.winiwarter@univie.ac.at

 


Abstract

There is a need for transactional processing in the Web service world. The software industry has responded by publishing a couple of Web service transaction proposals that are quite alike. However, these proposals basically define only communication protocols that indirectly implement advanced transaction models. The proposals lack accurate usage suggestions, and the rather obvious question "How can I use transactions in Web service based distributed systems?" is not answered satisfactorily anywhere. Only some of the proposals support arbitrary advanced transaction models, and doing so likely requires updating various transaction system components. This paper introduces TWSO (Transactional Web Service Orchestrations), a new approach to integrating transactional processing with Web service orchestrations that tries to overcome the problems stated above. TWSO concepts may appear in different manifestations, such as an XML vocabulary (TWSOL) or a Java API (TWSO4J). Constructs of TWSO manifestations are intended to be incorporated directly in Web service orchestration definitions. The usage pattern of TWSO is designed to resemble the programming pattern application programmers follow when using transaction-enabled components such as databases or application servers. Moreover, arbitrary advanced transaction models can be synthesized from a basic set of transaction primitives without requiring system updates.
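A common transaction primitive in orchestration settings is compensation: run each step's action, and on failure undo the completed steps in reverse order. The sketch below shows that primitive in the promised "database-like" usage pattern; it is a generic compensation model with invented step names, not TWSO's actual API.

```python
def run_transactional(steps):
    """Run (action, compensation) pairs; on failure, compensate the
    completed steps in reverse order (a generic compensation-based
    primitive, not TWSO's actual constructs)."""
    done = []
    try:
        for action, compensate in steps:
            action()
            done.append(compensate)
    except Exception:
        for compensate in reversed(done):
            compensate()
        return "compensated"
    return "committed"


log = []


def book_flight():
    log.append("book flight")


def cancel_flight():
    log.append("cancel flight")


def book_hotel():
    raise RuntimeError("hotel full")   # the second step fails


outcome = run_transactional([(book_flight, cancel_flight),
                             (book_hotel, lambda: None)])
# The flight booking is automatically compensated after the hotel failure.
```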
 


 

Semantic indexing and hyperlinking of multimedia news: the RitroveRAI System


Roberto Basili, Marco Cammisa, Emanuele Donati, Alessandro Moschitti
University of Roma, Tor Vergata
Department of Computer Science,
Via del Politecnico snc, 00133 Roma
Email: {basili,cammisa,donati,moschitti}@info.uniroma2.it   
 


Abstract

Web services tend to offer functional and non-functional requirements and capabilities in an agreed, machine-readable format. The goal is to provide automated discovery, selection, and binding of information as a native capability of middleware and applications; the major limitation, however, is the lack of clear and processable semantics. In this paper we present RitroveRAI, a system addressing the general problem of enriching a multimedia news stream with semantic metadata. News metadata are either derived explicitly from transcribed sentences or expressed implicitly in an automatically detected topical category. The enrichment process is accomplished by aligning individual audiovisual segments with news or journal articles reachable through the Web. The distributed process designed for RitroveRAI enables several extensions towards large-scale, public-access performance, as suggested in the discussion of the evaluation results of our current system.


 

A Hybrid Model to Improve Relevance in Document Retrieval


Tanveer J. Siddiqui
Department of Electronics & Communication
University of Allahabad
Allahabad
India
Email: tjs@jkinstitute.org   


Umashanker Tiwary
Indian Institute of Information Technology
Allahabad
India
Email: ust@iiita.ac.in   


Abstract

In the information retrieval community, much work focuses on increasing efficiency by capturing statistical features. The other dominant approach improves relevance by capturing semantic and contextual information, which is invariably inefficient. Generally the two approaches are assumed to be diametrically opposed. In this paper we combine them in a hybrid information retrieval model that works in two stages: the first stage is statistical, the second is based on semantics. We first downsize the document collection for a given query using the vector model, and then use a conceptual graph (CG) based representation to rank the documents. Our main objective is to investigate the use of conceptual graphs as a precision tool in the second stage; CGs bring semantics into the ranking process, resulting in improved relevance. Three experiments demonstrate the feasibility and usefulness of the model. A test run on the CACM-3204 collection showed a 34.8% increase in precision for a subset of the CACM queries. The second experiment used a test collection specifically designed to test the strength of the model when the same terms are used in different contexts; improved relevance was observed in this case as well. Applying the approach to results retrieved from LYCOS also showed significant improvement. The proposed model is efficient, scalable, and domain independent.
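The two-stage pipeline can be sketched as: a cheap statistical score downsizes the collection, then a semantic score reranks the survivors. Both scoring functions below are deliberately simplified stand-ins (term overlap for the vector model, shared concept-relation assertions for CG matching), not the paper's actual formulas.

```python
def vector_score(query_terms, doc_terms):
    # Stage 1: crude term-overlap stand-in for vector-model scoring.
    return len(set(query_terms) & set(doc_terms))


def cg_score(query_cgs, doc_cgs):
    # Stage 2: stand-in for conceptual-graph matching; here just the
    # number of shared (concept, relation, concept) assertions.
    return len(set(query_cgs) & set(doc_cgs))


def hybrid_retrieve(query, docs, k=2):
    """Downsize with the statistical stage, then rerank the shortlist
    with the semantic stage (a simplified sketch of the two-stage model)."""
    shortlist = sorted(
        docs, key=lambda d: vector_score(query["terms"], d["terms"]),
        reverse=True)[:k]
    return sorted(
        shortlist, key=lambda d: cg_score(query["cgs"], d["cgs"]),
        reverse=True)


query = {"terms": ["java", "compiler"],
         "cgs": [("compiler", "translates", "java")]}
docs = [
    {"id": 1, "terms": ["java", "island"], "cgs": [("java", "is-a", "island")]},
    {"id": 2, "terms": ["java", "compiler"],
     "cgs": [("compiler", "translates", "java")]},
    {"id": 3, "terms": ["coffee"], "cgs": []},
]
ranked = hybrid_retrieve(query, docs)
# The semantic stage keeps the compiler document ahead of the island one.
```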


 

Continuously Evaluating Approximate Similarity Search on High Dimensional Data Stream


Weiping Wang1, Jianzhong Li1,2, Chunyu Ai2, Shengfei Shi1
1School of Computer Science and Technology, Harbin Institute of Technology, China
2 School of Computer Science and Technology, Heilongjiang University, China
Email: {wpwang, lijzh, shengfei}@hit.edu.cn, aichunyu@hlju.edu.cn   


Abstract

In many applications, including online video monitoring systems, it is often necessary to find, from a sequence set stored in a database, the sequence most similar to a high-dimensional data stream. Due to the high computational complexity involved and the restrictions of query processing on data streams, continuously retrieving similar content from a high-dimensional data stream is very challenging: the evaluation algorithm must be fast enough to match the stream's arrival rate. To address this problem, we propose a method called CVNN. In our method, the sequences in the database are represented as compact summaries, called CVs, which can be stored in memory. An online algorithm continuously transforms the data stream into a small number of CVs, while the nearest neighbor query is processed periodically based on the similarity of the CVs. Experimental results show that our method is fairly effective.
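The compact-summary idea can be illustrated by collapsing a window of high-dimensional points into one summary vector and running the nearest-neighbor query over summaries instead of raw sequences. Using the component-wise mean as the summary is an assumption made here for illustration; it is not necessarily how the paper's CVs are constructed.

```python
def summarize(window):
    # Collapse a window of high-dimensional points into one compact
    # summary vector (component-wise mean), standing in for a CV.
    dim = len(window[0])
    return tuple(sum(p[i] for p in window) / len(window) for i in range(dim))


def dist2(a, b):
    # Squared Euclidean distance between two summary vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))


def nearest_sequence(stream_cv, database_cvs):
    """Periodic NN query: compare the stream's current summary against
    the precomputed summaries of the stored sequences (illustrative)."""
    return min(database_cvs, key=lambda name: dist2(stream_cv, database_cvs[name]))


database = {
    "seq-A": summarize([(0.0, 0.0), (0.2, 0.0)]),
    "seq-B": summarize([(5.0, 5.0), (5.2, 5.0)]),
}
stream_window = [(0.1, 0.1), (0.1, -0.1)]
best = nearest_sequence(summarize(stream_window), database)
```

Because only the small summaries are compared, the per-window cost stays independent of the stored sequences' full lengths, which is what lets the query keep up with the stream's arrival rate.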


 

Self Organizing Sensors by Minimization of Cluster Heads Using Intelligent Clustering


Kwangcheol Shin, Ajith Abraham and Sang Yong Han*
Department of Computer Science & Engineering
University of Minnesota, Minneapolis, MN 55455, USA

*School of Computer Science and Engineering, Chung-Ang University
221, Heukseok-dong, Dongjak-gu, Seoul 156-756, Korea


Abstract

Minimizing the number of cluster heads in a wireless sensor network is an important problem: fewer heads reduce channel contention and improve the efficiency of any algorithm executed at the cluster-head level. This paper proposes a Self Organizing Sensor (SOS) network based on an intelligent clustering algorithm that, unlike the Algorithm for Cluster Establishment (ACE) [2], requires neither many user-defined parameters nor random selection to form clusters. The proposed SOS algorithm is compared with ACE, and the empirical results clearly show that SOS reduces the number of cluster heads.
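The objective of head minimization can be illustrated with a deterministic greedy heuristic: repeatedly promote the node that covers the most still-uncovered nodes until the whole network is covered. This is a generic set-cover-style sketch on an invented topology, not the actual SOS algorithm.

```python
def elect_cluster_heads(adjacency):
    """Greedy head election: repeatedly promote the node covering the
    most still-uncovered nodes until everything is covered. A generic
    set-cover heuristic, not the exact SOS algorithm."""
    uncovered = set(adjacency)
    heads = []
    while uncovered:
        best = max(adjacency,
                   key=lambda n: len((adjacency[n] | {n}) & uncovered))
        heads.append(best)
        uncovered -= adjacency[best] | {best}
    return heads


# A small sensor graph (illustrative): node -> set of one-hop neighbours.
graph = {
    "a": {"b", "c"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"c", "e"},
    "e": {"d"},
}
heads = elect_cluster_heads(graph)   # two heads cover all five sensors
```

Note the election is deterministic, in the spirit of the abstract's point that SOS avoids the random cluster formation used by ACE.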

