Garousi, Vahid

Name Variants
Garousi-Yusifoglu, Vahid
G.,Vahid
G., Vahid
Garousi, Vahid
V.,Garousi
V., Garousi
Garousi,V.
Vahid, Garousi
Yusifoglu, Vahid Garousi
Job Title: Associate Professor (Doçent Doktor)
Main Affiliation: Software Engineering
Status: Former Staff

Sustainable Development Goals

SDG data is not available
This researcher does not have a Scopus ID.
This researcher does not have a WoS ID.
Scholarly Output: 13
Articles: 9
Views / Downloads: 1 / 0
Supervised MSc Theses: 0
Supervised PhD Theses: 0
WoS Citation Count: 611
Scopus Citation Count: 788
WoS h-index: 10
Scopus h-index: 11
Patents: 0
Projects: 0
WoS Citations per Publication: 47.00
Scopus Citations per Publication: 60.62
Open Access Source: 3
Supervised Theses: 0
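The two "Citations per Publication" figures appear to be derived metrics: the total citation count divided by the total scholarly output (13 items), rounded to two decimals. A minimal sketch of that assumed derivation, using the counts listed above:

```python
# Derived citation metrics, assuming "citations per publication" is the
# total citation count divided by total scholarly output (13 items).
scholarly_output = 13
wos_citations = 611
scopus_citations = 788

wos_per_pub = round(wos_citations / scholarly_output, 2)        # 47.0
scopus_per_pub = round(scopus_citations / scholarly_output, 2)  # 60.62

print(wos_per_pub, scopus_per_pub)
```

Both values match the profile's reported figures (47.00 and 60.62), which supports this reading of how the repository computes them.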


Journal / Count
Journal of Systems and Software: 4
Information and Software Technology: 3
ACM International Conference Proceeding Series (2014 International Conference on Software and Systems Process, ICSSP 2014, 26-28 May 2014, Nanjing; 105608): 1
CEUR Workshop Proceedings (9th Turkish National Software Engineering Symposium, UYMS 2015, 9-11 September 2015, Izmir; 117665): 1
17th International Conference on Evaluation and Assessment in Software Engineering (14-16 April 2013, Porto de Galinhas, Brazil): 1
(page 1 of 2 shown)

Scholarly Output Search Results

Now showing 1 - 3 of 3
  • Conference Object
    Citation - Scopus: 27
    When to Automate Software Testing? Decision Support Based on System Dynamics: An Industrial Case Study
    (Association for Computing Machinery, 2014) Sahaf, Z.; Garousi, V.; Pfahl, D.; Irving, R.; Amannejad, Y.
    Software test processes are complex and costly. To reduce testing effort without compromising effectiveness and product quality, automation of test activities has been adopted as a popular approach in software industry. However, since test automation usually requires substantial upfront investments, automation is not always more cost-effective than manual testing. To support decision-makers in finding the optimal degree of test automation in a given project, we propose in this paper a simulation model using the System Dynamics (SD) modeling technique. With the help of the simulation model, we can evaluate the performance of test processes with varying degrees of automation of test activities and help testers choose the most optimal cases. As the case study, we describe how we used our simulation model in the context of an Action Research (AR) study conducted in collaboration with a software company in Calgary, Canada. The goal of the study was to investigate how the simulation model can help decision-makers decide whether and to what degree the company should automate their test processes. As a first step, we compared the performances of the current fully manual testing with several cases of partly automated testing as anticipated for implementation in the partner company. The development of the simulation model as well as the analysis of simulation results helped the partner company to get a deeper understanding of the strengths and weaknesses of their current test process and supported decision-makers in the cost effective planning of improvements of selected test activities. © 2014 ACM.
  • Review
    Citation - WoS: 67
    Citation - Scopus: 81
    Software Test Maturity Assessment and Test Process Improvement: A Multivocal Literature Review
    (Elsevier, 2017) Garousi, Vahid; Felderer, Michael; Hacaloglu, Tuna
    Context: Software testing practices and processes in many companies are far from being mature and are usually conducted in ad-hoc fashions. Such immature practices lead to various negative outcomes, e.g., ineffectiveness of testing practices in detecting all the defects, and cost and schedule overruns of testing activities. To conduct test maturity assessment (TMA) and test process improvement (TPI) in a systematic manner, various TMA/TPI models and approaches have been proposed. Objective: It is important to identify the state-of-the-art and the-practice in this area to consolidate the list of all various test maturity models proposed by practitioners and researchers, the drivers of TMA/TPI, the associated challenges and the benefits and results of TMA/TPI. Our article aims to benefit the readers (both practitioners and researchers) by providing the most comprehensive survey of the area, to this date, in assessing and improving the maturity of test processes. Method: To achieve the above objective, we have performed a Multivocal Literature Review (MLR) study to find out what we know about TMA/TPI. A MLR is a form of a Systematic Literature Review (SLR) which includes the grey literature (e.g., blog posts and white papers) in addition to the published (formal) literature (e.g., journal and conference papers). We searched the academic literature using the Google Scholar and the grey literature using the regular Google search engine. Results: Our MLR and its results are based on 181 sources, 51 (29%) of which were grey literature and 130 (71%) were formally published sources. By summarizing what we know about TMA/TPI, our review identified 58 different test maturity models and a large number of sources with varying degrees of empirical evidence on this topic. We also conducted qualitative analysis (coding) to synthesize the drivers, challenges and benefits of TMA/TPI from the primary sources. 
Conclusion: We show that current maturity models and techniques in TMA/TPI provide reasonable advice for industry and the research community. We suggest directions for follow-up work, e.g., using the findings of this MLR in industry-academia collaborative projects and empirical evaluation of models and techniques in the area of TMA/TPI as reported in this article. (C) 2017 Elsevier B.V. All rights reserved.
  • Article
    Citation - WoS: 116
    Citation - Scopus: 144
    Smells in Software Test Code: A Survey of Knowledge in Industry and Academia
    (Elsevier Science Inc., 2018) Garousi, Vahid; Kucuk, Baris
    As a type of anti-pattern, test smells are defined as poorly designed tests and their presence may negatively affect the quality of test suites and production code. Test smells are the subject of active discussions among practitioners and researchers, and various guidelines to handle smells are constantly offered for smell prevention, smell detection, and smell correction. Since there is a vast grey literature as well as a large body of research studies in this domain, it is not practical for practitioners and researchers to locate and synthesize such a large literature. Motivated by the above need and to find out what we, as the community, know about smells in test code, we conducted a 'multivocal' literature mapping (classification) on both the scientific literature and also practitioners' grey literature. By surveying all the sources on test smells in both industry (120 sources) and academia (46 sources), 166 sources in total, our review presents the largest catalogue of test smells, along with the summary of guidelines/techniques and the tools to deal with those smells. This article aims to benefit the readers (both practitioners and researchers) by serving as an "index" to the vast body of knowledge in this important area, and by helping them develop high-quality test scripts, and minimize occurrences of test smells and their negative consequences in large test automation projects. (C) 2017 Elsevier Inc. All rights reserved.