Scientific Journals and Yearbooks Published at SAS

Article List

Computing and Informatics


Volume 28, 2009, No. 1

Contents:


  Resource Aware Run-Time Adaptation Support for Recovery Strategies
R. Tirtea, G. Deconinck

Recovery strategy, fault tolerance, adaptation, resource monitoring, availability

The selection of recovery strategies is often based only on the types and circumstances of the failures. However, changes in the environment, such as reduced resources at node level or degradation of quality of service, should also be considered before allocating a process/task to another host or before making reconfiguration decisions. In this paper we present why and how resource availability information should be taken into account when adapting recovery strategies. Such resource-aware run-time adaptation of recovery improves the availability and survivability of a system.
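As a toy illustration of this idea, not taken from the paper itself, a recovery manager might consult monitored resource availability before deciding between a local restart, migration and service degradation; the strategy names and thresholds below are invented:

    from dataclasses import dataclass

    @dataclass
    class NodeStatus:
        cpu_free: float      # fraction of CPU still available on the node
        mem_free_mb: int     # free memory on the node, in MB

    def choose_recovery(local: NodeStatus, remote: NodeStatus, task_mem_mb: int) -> str:
        """Pick a recovery strategy from current resource availability (toy sketch).

        Instead of reacting only to the failure type, the decision also looks at
        monitored resources, which is the point argued in the paper.
        """
        if local.mem_free_mb >= task_mem_mb and local.cpu_free > 0.25:
            return "restart task locally"
        if remote.mem_free_mb >= task_mem_mb and remote.cpu_free > 0.25:
            return "migrate task to the remote node"
        return "degrade service (drop non-critical tasks) and retry later"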

Computing and Informatics. Volume 28, 2009, No. 1: 3-28.

 
  Solving Large Scale Instances of the Distribution Design Problem Using Data Mining
H. Fraire, L. Cruz, J. Perez, R. Pazos, D. Romero, J. Frausto

Data mining, machine learning, distribution design problem

In this paper we address the solution of large instances of the distribution design problem. Traditional approaches do not consider that the instance size can significantly reduce the efficiency of the solution process. We propose a new approach that uses compression methods based on data mining techniques to transform the original instance into a new one. The goal of the transformation is to condense the operation access pattern of the original instance so that fewer resources are needed to solve it, without significantly reducing the quality of its solution. To validate the approach, we tested two proposed instance compression methods on a new model of the replicated version of the distribution design problem that incorporates generalized database objects. The experimental results show that our approach reduces the computational resources needed to solve large instances by at least 65%, without significantly reducing the quality of the solution. Given these encouraging results, we are currently designing and implementing efficient instance compression methods based on other data mining techniques.
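The abstract does not name the two compression methods. As a rough sketch of condensing an operation access pattern with a data mining technique, the example below uses k-means clustering as a hypothetical stand-in: queries with similar access rows are grouped and their frequencies merged, yielding a smaller instance.

    import numpy as np
    from sklearn.cluster import KMeans

    def compress_access_pattern(usage, frequencies, n_clusters):
        """Condense an operation access pattern (hypothetical illustration).

        usage:       (Q, A) 0/1 matrix -- which attributes each query accesses.
        frequencies: (Q,)   how often each query is issued.
        Returns a smaller (n_clusters, A) matrix plus merged frequencies,
        i.e. a compressed instance of the distribution design problem.
        """
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(usage)
        compressed = np.zeros((n_clusters, usage.shape[1]))
        freq = np.zeros(n_clusters)
        for label in range(n_clusters):
            members = usage[km.labels_ == label]
            compressed[label] = members.max(axis=0)          # union of accessed attributes
            freq[label] = frequencies[km.labels_ == label].sum()
        return compressed, freq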

Computing and Informatics. Volume 28, 2009, No. 1: 29-56.

 
  CAOS Coach 2006 Simulation Team: An Opponent Modelling Approach
J. A. Iglesias, A. Ledezma, A. Sanchis

Team behaviour, opponent modelling, RoboCup, coach simulation, multi-agent system

Agent technology represents a very interesting new means for analyzing, designing and building complex software systems. Nowadays, agent modelling in multi-agent systems is becoming increasingly complex and significant. The RoboCup Coach Competition is an exciting competition in the RoboCup Soccer League whose main goal is to encourage research in multi-agent modelling. This paper describes a novel method used by the team CAOS (CAOS Coach 2006 Simulation Team) in this competition. The objective of the team is to successfully model the behaviour of a multi-agent system.

Computing and Informatics. Volume 28, 2009, No. 1: 57-80.

 
  Face Recognition Using Gabor-based Improved Supervised Locality Preserving Projections
Y. Jin, Q. Ruan

Face recognition, Gabor wavelets, two-directional 2DPCA, Locality Preserving Projections (LPP), Gabor-based Improved Supervised Locality Preserving Projections (Gabor-based ISLPP)

A novel Gabor-based Improved Supervised Locality Preserving Projections method for face recognition is presented in this paper. The new algorithm combines a Gabor wavelet representation of face images with Improved Supervised Locality Preserving Projections, and it is robust to changes in illumination, facial expression and pose. A Gabor filter bank is first designed to extract features from the whole face image; a supervised locality preserving projection, improved by two-directional 2DPCA to eliminate redundancy among the Gabor features, is then applied to the Gabor feature vectors derived from the wavelet representation. The new algorithm benefits mainly from two aspects: first, Gabor wavelets are exploited in feature extraction for their useful properties, such as invariance to illumination, rotation, scale and translation; second, the Improved Supervised Locality Preserving Projections not only provide a category label for each class in the training set, but also reduce the number of coefficients needed for image representation in two directions, which boosts the recognition speed. Experiments on the ORL face database demonstrate the effectiveness and efficiency of the new method. The results show that our algorithm outperforms other popular approaches reported in the literature and achieves a much higher recognition rate.
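The pipeline starts with a bank of Gabor filters applied to the whole face image. A minimal sketch of that first stage is shown below; the scales, orientations and kernel size are illustrative choices, and the two-directional 2DPCA/ISLPP projection that follows in the paper is only indicated by a comment.

    import numpy as np
    from scipy.signal import convolve2d

    def gabor_kernel(sigma, theta, lam, size=31):
        """Real part of a 2-D Gabor kernel with orientation theta and wavelength lam."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        return np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / lam)

    def gabor_features(image, scales=(4, 8), orientations=4):
        """Stack the responses of a small Gabor filter bank into one feature array."""
        responses = []
        for sigma in scales:
            for k in range(orientations):
                kern = gabor_kernel(sigma, theta=k * np.pi / orientations, lam=2 * sigma)
                responses.append(convolve2d(image, kern, mode="same"))
        # The paper then feeds these responses to two-directional 2DPCA + ISLPP
        # to remove redundancy and obtain the final low-dimensional representation.
        return np.stack(responses)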

Computing and Informatics. Volume 28, 2009, No. 1: 81-95.

 
  Functional Testing of Processor Cores in FPGA-Based Applications
M. Wegrzyn, F. Novak, A. Biasizzo, M. Renovell

Built-in self-test, embedded processor core test, fault injection, fault modelling, functional test, single-event upset

Embedded processor cores, which are widely used in SRAM-based FPGA applications, are susceptible to SEU (Single Event Upset)-induced faults and need to be tested occasionally during system operation. Verifying a processor core is a difficult task, due to its complexity and the lack of user knowledge about the core implementation details. In user applications, processor cores are normally tested by executing some kind of functional test in which the processor's individual instructions are exercised with a set of deterministic test patterns, and the results are then compared with stored reference values. For practical reasons the number of test patterns and corresponding results is usually small, which inherently leads to low fault coverage. In this paper we develop a concept that combines the whole instruction-set test into one compact test sequence, which can then be repeated with different input test patterns. This improves the fault coverage considerably with no additional memory requirements.
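A hypothetical software model of the scheme: one compact sequence exercising the whole instruction set is rerun with several input pattern sets, and each run's compacted output is checked against a stored reference value. The execute() callback and the signature folding are illustrative, not the authors' implementation.

    def run_compact_test(execute, pattern_sets, references):
        """Repeat one compact instruction-set test with different input patterns.

        execute(patterns) stands in for running the compact test sequence on the
        embedded core and returning its outputs; the callback itself is hypothetical.
        """
        for patterns, expected in zip(pattern_sets, references):
            signature = 0
            for value in execute(patterns):
                # fold the outputs into a small signature so stored references stay compact
                signature = (signature * 31 + value) & 0xFFFFFFFF
            if signature != expected:
                return False          # mismatch: an SEU-induced fault was detected
        return True                   # all pattern sets matched the stored references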

Computing and Informatics. Volume 28, 2009, No. 1: 97-113.

 
  Development and Optimization of Computational Chemistry Algorithms

G. Mazur, M. Makowski

Computational chemistry, optimization, parallelization, software framework

The challenges specific to the development of computational chemistry software are discussed. Selected solutions are presented, including examples of algorithmic optimizations and improved load-balancing for parallel calculations. A software framework for development of new quantum-chemical algorithms is proposed. Key design points are discussed. Optimization techniques are briefly described. Important implementation aspects, like automatic code generation, are highlighted.

Computing and Informatics. Volume 28, 2009, No. 1: 115-125.

 
  Highly Efficient Twin Module Structure of 64-Bit Exponential Function Implemented on SGI RASC Platform
M. Wielgosz, E. Jamro, K. Wiatr

HPRC (High Performance Reconfigurable Computing), FPGA, elementary function, exponent function, RASC (Reconfigurable Application-Specific Computing)

This paper presents an implementation of the double precision exponential function. A novel table-based architecture, together with a short Taylor expansion, provides a low latency (30 clock cycles) which is comparable to 32-bit implementations. The low area consumption of a single exp() module (roughly 4% of the XC4LX200) allows several modules to be implemented in a single FPGA. The employment of massive parallelism results in high performance of the module. Nevertheless, because of external memory interface limitations, only a twin module structure is presented in this paper. This implementation aims primarily to meet the huge and strict precision and speed requirements of quantum chemistry. Each module is capable of processing at a speed of 200 MHz with a maximum error of 1 ulp and an RMSE of 0.62.
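A rough software model of a table-based exp() with a short Taylor correction is sketched below. The table size and the number of Taylor terms are illustrative and far smaller than what a 1-ulp double precision hardware design requires.

    import math

    TABLE_BITS = 10                                   # high bits of the fraction indexed into the table
    TABLE = [2.0 ** (i / 2 ** TABLE_BITS) for i in range(2 ** TABLE_BITS)]

    def exp_table(x: float) -> float:
        """Table-based exp(x) with a short Taylor correction (software model only)."""
        t = x / math.log(2.0)                         # exp(x) = 2**t
        k = math.floor(t)                             # integer part -> final exponent scaling
        f = t - k                                     # fractional part in [0, 1)
        hi = int(f * 2 ** TABLE_BITS)                 # high bits of f -> table index
        r = (f - hi / 2 ** TABLE_BITS) * math.log(2.0)  # tiny residual, |r| < ln2 / 2**TABLE_BITS
        taylor = 1.0 + r + r * r / 2.0                # short Taylor expansion of exp(r)
        return math.ldexp(TABLE[hi] * taylor, k)      # 2**k * 2**(hi/2**N) * exp(r)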

Computing and Informatics. Volume 28, 2009, No. 1: 127-137.

 
  Parallel Simulation of a Fluid Flow by Means of the SPH Method: OpenMP vs. MPI Comparison
P. Wroblewski, K. Boryczko

Computational fluid dynamics, SPH, parallel computing, OpenMP, MPI

The SPH method for simulating incompressible fluids is presented in this article. The background and principles of the SPH method are explained and its application to simulations of incompressible fluids is discussed. Parallel implementations of the SPH simulation in the OpenMP and MPI environments are demonstrated. Both models of parallel implementation are analyzed and discussed, and their results are compared.
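For reference, the core SPH operation is a kernel-weighted sum over neighbouring particles. The sketch below shows the density estimate with a Gaussian smoothing kernel; the pair loop is the part an OpenMP or MPI implementation distributes, and the kernel choice here is illustrative rather than the one used in the paper.

    import numpy as np

    def gaussian_kernel(r, h):
        """Standard 3-D Gaussian SPH smoothing kernel W(r, h)."""
        return np.exp(-(r / h) ** 2) / (np.pi ** 1.5 * h ** 3)

    def sph_density(positions, masses, h):
        """SPH density estimate rho_i = sum_j m_j * W(|r_i - r_j|, h).

        positions: (N, 3) array, masses: (N,) array, h: smoothing length.
        The loop over particle pairs is what an OpenMP or MPI version partitions
        across threads or ranks; here it is a plain vectorized NumPy version.
        """
        diff = positions[:, None, :] - positions[None, :, :]   # pairwise displacements
        dist = np.linalg.norm(diff, axis=-1)                    # pairwise distances
        return (masses[None, :] * gaussian_kernel(dist, h)).sum(axis=1)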

Computing and Informatics. Volume 28, 2009, No. 1: 139-150.