Search Results

Now showing 1 - 10 of 39
  • Review
    Citation - WoS: 16
    Citation - Scopus: 20
    Assessing the Coverage of E-Health Services in Sub-Saharan Africa: A Systematic Review and Analysis
    (Georg Thieme Verlag KG, 2017) Adeloye, Davies; Adigun, Taiwo; Misra, Sanjay; Omoregbe, Nicholas
    Background: E-Health has attracted growing interest globally. The relative lack of facilities, skills, funds, and information on existing e-Health initiatives has affected progress on e-Health in Africa. Objectives: To review publicly available literature on e-Health in sub-Saharan Africa (sSA) to provide information on existing and ongoing e-Health initiatives in the region. Methods: Searches of relevant literature were conducted on Medline, EMBASE, and Global Health, with search dates set from 1990 to 2016. We included studies on e-Health initiatives (prototypes, designs, or completed projects) targeting population groups in sSA. Results: Our search returned 2322 hits, with 26 studies retained. Included studies were conducted in 14 countries across the four sub-regions of sSA (Central, East, South and West) and spanned a 12-year period, 2002-2014. Six types of e-Health interventions were reported, with 17 studies (65%) based on telemedicine, followed by mHealth with 5 studies (19%). Other e-Health types include expert systems, electronic medical records, e-mails, and an online health module. Specific medical specialties covered include dermatology (19%), pathology (12%) and radiology (8%). Successes were 'widely reported' (representing 50% overall acceptance or positive feedback in a study) in 10 studies (38%). The prominent challenges reported were technical problems, poor internet and connectivity, participant selection biases, contextual issues, and lack of funds. Conclusion: E-Health is evolving in sSA, but with poorly published evidence. While we call for more quality research in the region, it is also important that population-wide policies and ongoing e-Health initiatives are contextually feasible, acceptable, and sustainable.
  • Article
    Citation - WoS: 8
    Citation - Scopus: 18
    Featuring CIO: Roles, Skills and Soft Skills
    (IGI Global, 2013) Cano, Carmen; Fernandez-Sanz, Luis; Misra, Sanjay
    This paper describes how the CIO (Chief Information Officer) position appears as a key role in organizations, along with the requirements for candidates. The authors compare the requirements presented in different studies to identify the most important skills for successful performance as a CIO. They stress the importance of non-technical skills as key factors in professional performance. The authors have compared soft skills for CIO or equivalent positions with those of other professional profiles, such as programmers or analysts, using data taken from thousands of job ads. An overview of the most valuable skills (especially soft skills) for CIOs is presented.
  • Article
    Citation - WoS: 8
    Citation - Scopus: 10
    COBOL Systems Migration to SOA: Assessing Antipatterns and Complexity
    (Kaunas Univ Technology, 2019) Mateos, Cristian; Zunino, Alejandro; Flores, Andres; Misra, Sanjay
    SOA and Web Services allow users to easily expose business functions to build larger distributed systems. However, legacy systems - mostly written in COBOL - are left aside unless a migration approach is applied. The main approaches are direct and indirect migration. The former wraps COBOL programs with a thin layer of a Web Service-oriented language/platform. The latter requires reengineering COBOL functions into a modern language/platform. In our previous work, we presented an intermediate approach based on direct migration in which the developed Web Services are later refactored to improve the quality of their interfaces. The refactorings mainly capture good practices inherent to indirect migration. To this end, antipatterns in WSDL documents (common bad practices) are detected to prevent issues related to WSDL understanding and discoverability. In this paper, we assess antipatterns in the WSDL documents generated by the three migration approaches. In addition, the complexity of the generated Web Services' interfaces is measured to address both comprehension and interoperability. We apply a metric suite (by Baski & Misra) to measure the complexity of service interfaces, i.e., WSDL documents. Migrations of two real COBOL systems under the three approaches were assessed for antipattern evidence and the complexity level of the generated SOA frontiers - a total of 431 WSDL documents.
  • Article
    Citation - WoS: 6
    Citation - Scopus: 9
    Experimental Simulation-Based Performance Evaluation of an SMS-Based Emergency Geolocation Notification System
    (Hindawi Ltd, 2017) Osebor, Isibor; Misra, Sanjay; Omoregbe, Nicholas; Adewumi, Adewole; Fernandez-Sanz, Luis
    In an emergency, a prompt response can save the lives of victims, which makes response time a critical issue in emergency medical services (EMS). Designing a system that simplifies locating emergency scenes is a step towards improving response time. This paper therefore implemented and evaluated the performance of an SMS-based emergency geolocation notification system, with emphasis on its SMS delivery time and its geolocation and dispatch time. Using the RAS metrics recommended by IEEE for evaluation, the designed system was found to be efficient and effective: its reliability ranged from 62.7% to 70.0%, while its availability stood at 99%, with a downtime of 3.65 days/year.
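The availability figure quoted above maps to annual downtime by simple arithmetic: 99% availability leaves 1% of the year unavailable, i.e. 0.01 × 365 = 3.65 days. A minimal sketch of that conversion (the function name is ours, not the paper's):

```python
def downtime_days_per_year(availability: float) -> float:
    """Annual downtime implied by a steady-state availability in [0, 1]."""
    return (1.0 - availability) * 365

print(round(downtime_days_per_year(0.99), 2))  # 3.65, matching the abstract
```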
  • Article
    Citation - Scopus: 1
    Optimizing the Stochastic Deployment of Small Base Stations in an Interleave Division Multiple Access-Based Heterogeneous Cellular Networks
    (Wiley, 2022) Noma-Osaghae, Etinosa; Misra, Sanjay; Koyuncu, Murat
    The use of small base stations (SBSs) to improve the throughput of cellular networks gave rise to heterogeneous cellular networks (HCNs). Still, the performance of interleave division multiple access (IDMA) in sleep-mode-enabled HCNs has not been studied in the existing literature. This research examines the 24-h throughput, spectral efficiency (SE), and energy efficiency (EE) of an IDMA-based HCN and compares the results with orthogonal frequency division multiple access (OFDMA). An energy-spectral-efficiency (ESE) model of a two-tier HCN was developed. A weighted-sum modified particle swarm optimization (PSO) algorithm simultaneously maximized the SE and EE of the IDMA-based HCN. The results showed that IDMA performs at least 68% better than OFDMA on the throughput metric. They also showed that the PSO algorithm produced the Pareto-optimal front at moderate traffic levels for all varied network parameters: SINR threshold, SBS density, and sleep mode technique. The IDMA-based HCN can improve throughput, SE, and EE via sleep mode techniques. Still, the combination of network parameters that simultaneously maximizes the SE and EE is interference limited. In sleep mode, the performance of the HCN is better if the SBSs can adapt to spatial and temporal variations in network traffic.
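The abstract does not give the ESE model or the modified PSO itself, but the weighted-sum idea - collapsing SE and EE into one scalar fitness and letting a particle swarm maximize it - can be sketched with stand-in objectives. Everything below (`f_se`, `f_ee`, the swarm coefficients) is an illustrative assumption, not the paper's model:

```python
import random

def f_se(x):  # hypothetical spectral-efficiency surrogate, peak at x = 1
    return -(x - 1.0) ** 2 + 4.0

def f_ee(x):  # hypothetical energy-efficiency surrogate, peak at x = 2
    return -(x - 2.0) ** 2 + 3.0

def pso_weighted_sum(w=0.5, n_particles=20, iters=100, lo=-5.0, hi=5.0, seed=0):
    """Maximize w*f_se + (1-w)*f_ee over [lo, hi] with a basic 1-D PSO."""
    rng = random.Random(seed)
    fitness = lambda x: w * f_se(x) + (1 - w) * f_ee(x)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                      # each particle's best-known position
    gbest = max(pos, key=fitness)       # swarm's best-known position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # inertia + cognitive + social terms (standard PSO update)
            vel[i] = 0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i]) \
                                  + 1.5 * r2 * (gbest - pos[i])
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            if fitness(pos[i]) > fitness(pbest[i]):
                pbest[i] = pos[i]
            if fitness(pos[i]) > fitness(gbest):
                gbest = pos[i]
    return gbest

best = pso_weighted_sum(w=0.5)
# With equal weights, the combined quadratic peaks at x = 1.5.
```

Sweeping `w` from 0 to 1 and recording the resulting (SE, EE) pairs is how a weighted-sum scheme traces out a Pareto front.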
  • Article
    Citation - WoS: 18
    Citation - Scopus: 35
    Distributed Centrality Analysis of Social Network Data Using MapReduce
    (MDPI, 2019) Behera, Ranjan Kumar; Rath, Santanu Kumar; Misra, Sanjay; Damasevicius, Robertas; Maskeliunas, Rytis
    Analyzing the structure of a social network helps in gaining insights into interactions and relationships among users while revealing the patterns of their online behavior. Network centrality is a measure of the importance of a node in a network, which allows revealing the structural patterns and morphology of networks. We propose a distributed computing approach for calculating the network centrality value of each user using the MapReduce approach on the Hadoop platform, which allows faster and more efficient computation than the conventional implementation. A distributed approach is scalable and enables efficient computation on large-scale datasets, such as social network data. The proposed approach improves the calculation performance of degree centrality by 39.8%, closeness centrality by 40.7%, and eigenvalue centrality by 41.1% on a Twitter dataset.
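Of the three centralities, degree centrality fits the MapReduce pattern most directly: the map phase emits a count of 1 for each endpoint of each edge, and the reduce phase sums the counts per node. A single-process sketch of that pattern (not the paper's Hadoop code; the sample edge list is ours):

```python
from collections import defaultdict

def map_phase(edges):
    """Emit (node, 1) for each endpoint of each undirected edge."""
    for u, v in edges:
        yield (u, 1)
        yield (v, 1)

def reduce_phase(pairs):
    """Sum the emitted counts per node, giving raw degrees."""
    counts = defaultdict(int)
    for node, one in pairs:
        counts[node] += one
    return dict(counts)

def degree_centrality(edges):
    degrees = reduce_phase(map_phase(edges))
    n = len(degrees)
    # Normalized degree centrality: degree / (n - 1)
    return {node: d / (n - 1) for node, d in degrees.items()}

edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]
print(degree_centrality(edges))
```

On Hadoop, `map_phase` and `reduce_phase` would run as separate Mapper and Reducer tasks over partitions of the edge list, which is where the reported speedups come from.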
  • Article
    Citation - WoS: 4
    Document Type Definition (DTD) Metrics
    (Editura Acad Romane, 2011) Basci, Dilek; Misra, Sanjay
    In this paper, we present two complexity metrics for assessing the quality of schemas written in the Document Type Definition (DTD) language. Both the "Entropy (E) metric: E(DTD)" and the "Distinct Structured Element Repetition Scale (DSERS) metric: DSERS(DTD)" are intended to measure the structural complexity of schemas in the DTD language. These metrics exploit a directed-graph representation of the schema document and consider the complexity of a schema due to its similarly structured elements and the occurrences of these elements. The empirical and theoretical validations of these metrics demonstrate their robustness.
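The abstract names the metrics but not their formulas. A plausible reading of an entropy-style structural metric is Shannon entropy over classes of similarly structured elements: the more evenly elements spread across distinct structures, the higher the complexity. A sketch under that assumption (how elements are grouped into classes - the paper's actual contribution - is taken here as a precomputed label per element):

```python
import math
from collections import Counter

def schema_entropy(element_classes):
    """Shannon entropy (bits) of the distribution of structural classes."""
    total = len(element_classes)
    counts = Counter(element_classes)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Four elements falling into two equally sized structural classes:
print(schema_entropy(["seq", "seq", "choice", "choice"]))  # 1.0 bit
```

A schema whose elements all share one structure scores 0; maximal diversity (every element structurally distinct) maximizes the score.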
  • Article
    Citation - WoS: 12
    Citation - Scopus: 17
    Entropy as a Measure of Quality of XML Schema Document
    (Zarka Private Univ, 2011) Basci, Dilek; Misra, Sanjay
    In this paper, a metric for assessing the structural complexity of eXtensible Markup Language schema documents is formulated. The metric, 'Schema Entropy' (SE), is based on the concept of entropy and is intended to measure the complexity of schema documents written in the W3C XML Schema Language due to the diversity in the structures of their elements. The SE is useful in evaluating the efficiency of schema designs. A good design reduces maintenance effort. Therefore, our metric provides valuable information about the reliability and maintainability of systems. In this respect, the metric is believed to be a valuable contribution to improving the quality of XML-based systems. It is demonstrated with examples and validated empirically through actual test cases.
  • Article
    Citation - WoS: 17
    Citation - Scopus: 26
    Toward Ontology-Based Risk Management Framework for Software Projects: An Empirical Study
    (Wiley, 2020) Abioye, Temitope Elizabeth; Arogundade, Oluwasefunmi Tale; Misra, Sanjay; Akinwale, Adio T.; Adeniran, Olusola John
    Software risk management is a proactive decision-making practice with processes, methods, and tools for managing risks in a software project. Many existing techniques for software project risk management are textual documentation with varying perspectives that are nonreusable and cannot be shared. In this paper, a life-cycle approach to an ontology-based risk management framework for software projects is presented. A dataset drawn from the literature, domain experts, and practitioners is used. The identified risks are refined by 19 software experts; risks are conceptualized, modeled, and developed using Protege. The risks are qualitatively analyzed and prioritized, and aversion methods are provided. The framework is adopted in real-life software projects. Precision, recall, and F-measure metrics are used to validate the performance of the extraction tool, while performance and perception evaluations are carried out using a performance appraisal form and the technology acceptance model, respectively. Mean scores from the performance and perception evaluations are compared with an evaluation concept scale. Results showed that cost is reduced, high-quality projects are delivered on time, and software developers found the framework a potent tool for their day-to-day software development activities.
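The precision, recall, and F-measure metrics used above to validate the extraction tool follow from true-positive, false-positive, and false-negative counts. The standard definitions can be sketched as follows (the counts are illustrative, not from the study):

```python
def precision_recall_f1(tp: int, fp: int, fn: int):
    """Standard precision, recall, and F1 (their harmonic mean) from raw counts."""
    precision = tp / (tp + fp)   # fraction of extracted items that are correct
    recall = tp / (tp + fn)      # fraction of relevant items that were extracted
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts: 8 correct extractions, 2 spurious, 2 missed.
p, r, f1 = precision_recall_f1(tp=8, fp=2, fn=2)
print(p, r, f1)
```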
  • Article
    Citation - WoS: 7
    Citation - Scopus: 15
    Lossless Text Compression Technique Using Syllable-Based Morphology
    (Zarka Private Univ, 2011) Akman, Ibrahim; Bayindir, Hakan; Ozleme, Serkan; Akin, Zehra; Misra, Sanjay
    In this paper, we present a new lossless text compression technique which utilizes the syllable-based morphology of multi-syllabic languages. The proposed algorithm partitions words into their syllables and then produces shorter bit representations of them for compression. The method has six main components, namely the source file, filtering unit, syllable unit, compression unit, dictionary file, and target file. The number of bits used to code a syllable depends on the number of entries in the dictionary file. The proposed algorithm was implemented and tested on 20 texts of different lengths collected from different fields. The results indicated a compression of up to 43%.
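The pipeline described - split words into syllables, look each syllable up in a dictionary, emit a shorter code - can be sketched end to end. The naive vowel-based splitter and plain integer codes below are illustrative assumptions; the paper's morphology rules and bit-level packing are not given in the abstract:

```python
def syllabify(word):
    """Naive placeholder syllabifier: split after each vowel.
    Real morphological rules for multi-syllabic languages differ."""
    syllables, cur = [], ""
    for ch in word:
        cur += ch
        if ch in "aeiou":
            syllables.append(cur)
            cur = ""
    if cur:  # attach a trailing consonant cluster to the last syllable
        if syllables:
            syllables[-1] += cur
        else:
            syllables.append(cur)
    return syllables

def compress(words):
    """Map each syllable to a dictionary code; repeated syllables reuse codes."""
    dictionary, encoded = {}, []
    for word in words:
        codes = [dictionary.setdefault(s, len(dictionary)) for s in syllabify(word)]
        encoded.append(codes)
    return encoded, dictionary

def decompress(encoded, dictionary):
    """Invert the dictionary and rebuild each word from its syllable codes."""
    inv = {code: syl for syl, code in dictionary.items()}
    return ["".join(inv[c] for c in codes) for codes in encoded]

words = ["banana", "bandana"]
enc, d = compress(words)
assert decompress(enc, d) == words  # lossless round trip
```

Compression comes from repeated syllables sharing one dictionary entry; with few distinct syllables, each code needs only a few bits, which is the dependence on dictionary size the abstract mentions.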