Scientific Journals and Yearbooks Published at SAS

Article List

Computing and Informatics

Volume 28, 2009, No. 5


Incorporating structured text retrieval into the extended Boolean model.

M. C. du Plessis, G. de V. de Kock

Hybrid search algorithms, biographical database, structured text retrieval, extended Boolean model, searches on dates

Conventional information retrieval models are inappropriate for use in databases containing semi-structured biographical data. A hybrid algorithm that effectively addresses many of the problems in searching biographical databases is presented in this article. An overview of applicable structured text retrieval algorithms is given, with a specific focus on the tree matching model. Small adaptations to the Extended Boolean Model, making it more applicable to biographical databases, are described. The adaptation of tree matching models to the hierarchical nature of data in a person record is described, and a distance function between query and record is defined. A hybrid model combining the Extended Boolean Model and the adapted Tree Matching Model is then presented. A fast ranking algorithm appropriate for general searches and a more effective (but more resource-intensive) algorithm for more advanced searches are given. It is shown how dates can be incorporated in the hybrid model to create a more powerful search algorithm. The hybrid algorithm can be used to rank records in descending order of relevance to a user's query.

How to cite (APA format):
du Plessis, M. C., & de Kock, G. de V. (2009). Incorporating structured text retrieval into the extended Boolean model. Computing and Informatics, 28(5), 581-597.
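As background to the abstract above, the classic Extended Boolean Model (the p-norm model of Salton, Fox and Wu) scores documents against OR- and AND-queries using p-norms of term weights; the paper adapts this model, but the adaptation itself is not reproduced here. A minimal sketch of the standard formulas (function names are illustrative, not from the paper):

```python
def pnorm_or(weights, p=2.0):
    """Extended Boolean OR similarity: the p-norm of the query-term
    weights (each in [0, 1]), normalised by the number of terms."""
    n = len(weights)
    return (sum(w ** p for w in weights) / n) ** (1.0 / p)

def pnorm_and(weights, p=2.0):
    """Extended Boolean AND similarity: one minus the p-norm of the
    complements of the term weights."""
    n = len(weights)
    return 1.0 - (sum((1.0 - w) ** p for w in weights) / n) ** (1.0 / p)
```

With p = 1 both reduce to a simple average (vector-model behaviour); as p grows, they approach strict Boolean OR/AND, which is what makes the model a tunable middle ground.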

Molecular solutions for double and partial digest problems in polynomial time.

M. Ganjtabesh, H. Ahrabian, A. Nowzari-Dalini

DNA computing, bioinformatics, digest problem

A fundamental problem in computational biology is the construction of physical maps of chromosomes from hybridization experiments between unique probes and clones of chromosome fragments. The double and partial digest problems are two intractable problems used to construct physical maps of DNA molecules in bioinformatics. Several approaches, including exponential algorithms and heuristic algorithms, have been proposed to tackle these problems. In this paper we present two polynomial time molecular algorithms for both problems. To this end, a molecular model similar to the Adleman-Lipton model is presented. The presented operations are simple and performed in polynomial time. Our algorithms are computationally simulated.

How to cite (APA format):
Ganjtabesh, M., Ahrabian, H., & Nowzari-Dalini, A. (2009). Molecular solutions for double and partial digest problems in polynomial time. Computing and Informatics, 28(5), 599-618.
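For readers unfamiliar with the problem family above: the Partial Digest Problem asks for a set of restriction-site positions whose multiset of pairwise distances equals a given multiset of fragment lengths. The paper's molecular algorithms are not reproduced here, but a small sketch of a solution checker shows the problem's structure:

```python
from collections import Counter
from itertools import combinations

def pairwise_distances(points):
    """Multiset of all pairwise distances between site positions."""
    return Counter(abs(a - b) for a, b in combinations(points, 2))

def is_pdp_solution(points, fragment_lengths):
    """True if the point set explains the measured fragment lengths."""
    return pairwise_distances(points) == Counter(fragment_lengths)
```

For example, the site positions {0, 2, 4, 7, 10} produce exactly the distance multiset {2, 2, 3, 3, 4, 5, 6, 7, 8, 10}, a standard textbook instance.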

A situation-aware cross-platform architecture for ubiquitous game.

J.H. Han, D.H. Lee, H. Kim, H. P. In, H.S. Chae, Y.I. Eom

Cross-platform game, situation-aware middleware, conflict resolution

Multi-player online games (MOGs) are popular these days. However, contemporary MOGs do not truly support ubiquity, in the sense that a seamless service across heterogeneous hardware platforms is not provided. This paper presents the architecture of a cross-platform online game, which provides a service to users on heterogeneous platforms and is equipped with a situation-aware capability enabling users to move seamlessly between heterogeneous platforms. Experimental results from prototype implementations show the feasibility of the situation-aware cross-platform game.

How to cite (APA format):
Han, J. H., Lee, D. H., Kim, H., In, H. P., Chae, H. S., & Eom, Y. I. (2009). A situation-aware cross-platform architecture for ubiquitous game. Computing and Informatics, 28(5), 619-633.

Design and development of financial applications using ontology-based multi-agent systems.

P. Ray, N. Paramesh, W. Ying, A. Sujanani, D. Lee, R. Bhar

Ontology, multi-agent systems, financial services domain

Researchers in the field of finance now use increasingly sophisticated mathematical models that require intelligent software on high performance computing systems. Agent models designed for financial markets to date have their knowledge specified through low-level programming that requires technical expertise in software, not normally available to finance professionals. Hence there is a need for system development methodologies where domain experts and researchers can specify the behaviour of agent applications without any knowledge of the underlying agent software. This paper proposes an approach that achieves the above objectives through the use of ontologies that drive the behaviours of agents. This approach contributes to the building of semantically aware intelligent services, where ontologies rather than low-level programming dictate the characteristics of the agent applications. This approach is expected to allow more extensive usage of multi-agent systems in financial business applications.

How to cite (APA format):
Ray, P., Paramesh, N., Ying, W., Sujanani, A., Lee, D., & Bhar, R. (2009). Design and development of financial applications using ontology-based multi-agent systems. Computing and Informatics, 28(5), 635-654.

Verification of a fieldbus scheduling protocol using timed automata.

N. Petalidis

Fieldbus, formal verification, protocol verification, timed automata, UPPAAL

This paper deals with the formal verification of a fieldbus real-time scheduling mechanism, using the notion of timed automata and the UPPAAL model checker. A new approach is proposed here that treats the set of schedulers regulating access to a fieldbus as a separate entity, called the scheduling layer. In addition, a network with a changing topology is considered, where nodes may be turned on or off. The behaviour of the scheduling layer in conjunction with the data link, the medium and the network management layer is examined, and it is proved to enjoy a number of desirable properties.

How to cite (APA format):
Petalidis, N. (2009). Verification of a fieldbus scheduling protocol using timed automata. Computing and Informatics, 28(5), 655-672.

The impact of the conflict on solving distributed constraint satisfaction problems.

S. Hosseini, K. Zamanifar

Distributed constraint satisfaction, APO, cooperative mediation, multi-agent

Distributed Constraint Satisfaction Problems (DCSPs) encompass a vast number of AI and Multi-Agent problems. Many important efforts have recently been made to solve these kinds of problems using both backtracking-based and mediation-based methods. One of the most successful mediation-based algorithms in this field is the Asynchronous Partial Overlay (APO) algorithm. By choosing some agents as mediators, APO tries to centralize portions of the distributed problem, and each mediator then tries to solve its centralized sub-problem. This work continues until the whole problem is solved. This paper presents a new strategy for selecting mediators. The main idea behind this strategy is that the number of a mediator's conflicts (violated constraints) directly impacts its performance. Experimental results show that choosing the mediators with the greatest number of conflicts not only leads to a considerable decrease in APO complexity, but can also decrease the complexity of other extensions of APO such as the IAPO algorithm. MaxCAPO and MaxCIAPO are two new extensions of APO which embody this idea and are presented in this article. The results of using this mediator selection strategy show a rapid and desirable improvement over various parameters in comparison with APO and IAPO.

How to cite (APA format):
Hosseini, S., & Zamanifar, K. (2009). The impact of the conflict on solving distributed constraint satisfaction problems. Computing and Informatics, 28(5), 673-693.
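The selection strategy described in the abstract (pick the agent involved in the most violated constraints as mediator) can be sketched in a few lines. This is an illustration of the general idea on a graph-colouring-style DCSP with not-equal constraints, not the paper's MaxCAPO/MaxCIAPO algorithms themselves:

```python
def conflicts(agent, assignment, neq_edges):
    """Count the not-equal constraints incident to `agent` that are
    violated under the current assignment (a dict: agent -> value)."""
    return sum(1 for (u, v) in neq_edges
               if agent in (u, v) and assignment[u] == assignment[v])

def pick_mediator(assignment, neq_edges):
    """Max-conflict mediator selection: the agent with the greatest
    number of violated constraints is chosen to mediate."""
    return max(assignment, key=lambda a: conflicts(a, assignment, neq_edges))
```

In full APO the selection is performed asynchronously with priority messages between neighbours; the sketch only captures the ranking criterion.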

Interactive visualisation of oligomer frequency in DNA.

M. Makula, L. Benuskova

Chaos game representation, iterated function systems, bioinformatics, information visualisation, fractal representation

Since 1990, bioinformaticians have been exploring applications of the Chaos Game Representation (CGR) for visualisation, statistical characterisation and comparison of DNA sequences. We focus on the development of a new computational algorithm and the description of a new software tool that enables CGR visualisation of the frequencies of K-mers (oligomers) in a flexible way, such that it is possible to visualise the whole genome or any of its parts (such as genes), and to compare several sequences in parallel, all in real time. The user can interactively specify the size and position of the visualised region of the DNA sequence, zoom in or out, and change the parameters of the visualisation. The tool has been written in the Java language and is freely available to the public.

How to cite (APA format):
Makula, M., & Benuskova, L. (2009). Interactive visualisation of oligomer frequency in DNA. Computing and Informatics, 28(5), 695-710.
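The Chaos Game Representation mentioned above is a well-known construction: each nucleotide is assigned a corner of the unit square, and the plotted point repeatedly moves halfway towards the corner of the next base, so that after k steps the point lands in the subsquare determined by the last k bases. A minimal sketch of this iteration and the resulting k-mer frequency grid (the paper's interactive tool is not reproduced here):

```python
# Conventional corner assignment; other papers permute the corners.
CORNERS = {'A': (0.0, 0.0), 'C': (0.0, 1.0), 'G': (1.0, 1.0), 'T': (1.0, 0.0)}

def cgr_points(seq):
    """Iterate the chaos game: move halfway towards each base's corner."""
    x, y = 0.5, 0.5
    points = []
    for base in seq:
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0
        points.append((x, y))
    return points

def kmer_frequency_grid(seq, k):
    """2^k x 2^k grid of k-mer counts: after k bases the CGR point lies
    in the cell indexed by the last k bases, so binning the points
    (skipping the first k-1, which encode shorter prefixes) counts k-mers."""
    n = 2 ** k
    grid = [[0] * n for _ in range(n)]
    for x, y in cgr_points(seq)[k - 1:]:
        grid[int(y * n)][int(x * n)] += 1
    return grid
```

Rendering the grid as a heat map gives the classic fractal CGR picture; the tool described in the paper adds zooming, region selection and side-by-side comparison on top of this counting step.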

Efficient computation and neural processing of astrometric images.

R. Cancelliere, M. Gai

Fourier transform, image analysis, neural network diagnosis

In this paper we show that in certain cases, such as the generation of astronomical images used for high-precision astrometric measurements, an optimised implementation of the DFT algorithm can be more efficient than the FFT. The application considered requires the generation of large data sets for the training and test sets needed for neural network estimation and removal of a systematic error called chromaticity. The problem also requires a convenient choice of image encoding parameters; in our case, the one-dimensional lowest order moments proved to be an adequate solution. These parameters are then used as inputs to a feed-forward neural network, trained by backpropagation, to remove chromaticity.

How to cite (APA format):
Cancelliere, R., & Gai, M. (2009). Efficient computation and neural processing of astrometric images. Computing and Informatics, 28(5), 711-727.
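One general reason a direct DFT can beat an FFT, alluded to in the abstract, is that the FFT always computes all N output coefficients, whereas a direct evaluation can be restricted to just the bins actually needed (O(N) per bin versus O(N log N) for the full transform). A hedged sketch of this selective evaluation, which is not the paper's specific optimised implementation:

```python
import cmath

def dft_bins(x, bins):
    """Evaluate only the requested DFT coefficients of x directly from
    the definition: X[k] = sum_n x[n] * exp(-2*pi*i*k*n/N).
    Cost is O(N) per requested bin, so for few bins this can be
    cheaper than a full O(N log N) FFT."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in bins]
```

For a constant signal, bin 0 carries the whole energy and every other bin is zero, which makes a convenient sanity check.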

A hybrid algorithm for the longest common transposition-invariant subsequence problem.

S. Deorowicz, S. Grabowski

Longest common transposition-invariant subsequence (LCTS), bit-parallelism, sparse dynamic programming, string matching

The longest common transposition-invariant subsequence (LCTS) problem is a music information retrieval oriented variation of the classic LCS problem. There are basically only two known efficient approaches to calculate the length of the LCTS, one based on sparse dynamic programming and the other on bit-parallelism. In this work, we propose a hybrid algorithm picking the better of the two algorithms for individual subproblems. Experiments on music (MIDI), with 32-bit and 64-bit implementations, show that the proposed algorithm outperforms the faster of the two component algorithms by a factor of 1.4–2.0, depending on sequence lengths. Similar, if not better, improvements can be observed for random data with Gaussian distribution. Also for uniformly random data, the hybrid algorithm is the winner if the alphabet is neither too small (at least 32 symbols) nor too large (up to 128 symbols). Part of the success of our scheme is attributed to a quite robust component selection heuristic.

How to cite (APA format):
Deorowicz, S., & Grabowski, S. (2009). A hybrid algorithm for the longest common transposition-invariant subsequence problem. Computing and Informatics, 28(5), 729-744.
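For orientation, the LCTS problem defined above can be solved naively by trying every transposition t of the alphabet and taking the best ordinary LCS; the paper's contribution is a hybrid of two far faster methods (sparse dynamic programming and bit-parallelism), which this sketch does not implement:

```python
def lcs_len(a, b):
    """Classic O(|a|*|b|) dynamic-programming LCS length."""
    m = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            m[i][j] = m[i - 1][j - 1] + 1 if x == y else max(m[i - 1][j], m[i][j - 1])
    return m[len(a)][len(b)]

def lcts_len(a, b, sigma):
    """Naive LCTS over integer sequences drawn from {0, ..., sigma-1}:
    shift b by every transposition t and keep the best LCS."""
    return max(lcs_len(a, [y + t for y in b])
               for t in range(-(sigma - 1), sigma))
```

In the music retrieval setting the transposition corresponds to playing the same melody in a different key; the naive scheme costs O(sigma * |a| * |b|), which is exactly what the sparse and bit-parallel algorithms compared in the paper improve upon.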