Search Results
Now showing 1 - 5 of 5
Review | Citations - WoS: 3, Scopus: 6
Machine Learning for Sustainable Reutilization of Waste Materials as Energy Sources - a Comprehensive Review (Taylor & Francis Inc., 2024)
Peng, Wei; Sadaghiani, Omid Karimi
This work reviews Machine Learning applications in the sustainable utilization of waste materials as energy sources; an analysis of past works revealed the absence of such a review. To fill this gap, the origin of waste biomass raw materials is explained, and the application of Machine Learning in this area is scrutinized. After analysis of numerous papers, it is concluded that Machine Learning and Deep Learning are widely utilized in waste biomass production to enhance the quality and quantity of production, improve predictions, diminish losses, and improve storage and transformation conditions. The positive effects and applications, together with the utilized algorithms and other relevant information, are collected in this work for the first time. According to the statistical analysis, the Artificial Neural Network (ANN) algorithm has been applied in 20% of the studies on Machine Learning and Deep Learning in waste biomass raw materials. The Support Vector Machine (SVM) and Random Forest (RF) are the second and third most-utilized algorithms, applied in 15% and 14% of studies, respectively. Meanwhile, 27% of studies focused on applications of Machine Learning and Deep Learning to forest wastes.

Conference Object | Citations - WoS: 1
An Undergraduate Curriculum for Deep Learning (IEEE, 2018)
Tirkes, Guzin; Ekin, Cansu Cigdem; Sengul, Gokhan; Bostan, Atila; Karakaya, Murat
Deep Learning (DL) is an interesting and rapidly developing field of research which is currently used in industry and across many disciplines to address a wide range of problems, from image classification, computer vision, video games, bioinformatics, and handwriting recognition to machine translation. The starting point of this study is the recognition of a wide gap between industry's need for specialists in DL technology and the insufficient education provided by universities. Higher education institutions are the best environment to provide this expertise to students; however, most universities currently do not offer specifically designed DL courses. Thus, the main objective of this study is to design a novel curriculum, comprising two courses, to facilitate the teaching and learning of DL. The proposed curriculum will enable students to solve real-world problems by applying DL approaches and to gain the background necessary to adapt their knowledge to more advanced, industry-specific fields.

Article | Citations - WoS: 2, Scopus: 4
Exploiting Visual Features in Financial Time Series Prediction (IGI Global, 2020)
Karacor, Adil Gursel; Erkan, Turan Erman
The possibility of enhancing prediction accuracy for foreign exchange rates was investigated in two ways: first, by applying an outside-the-box approach that models price graphs by exploiting their visual properties; and second, by employing the most efficient pattern-detection methods to classify the direction of movement. The approach exploits the visual properties of price graphs, making use of density regions along with high and low values that describe the shape; hence, the authors propose the name "Finance Vision." The data used in the predictive model consist of 1-hour past price values of 4 different currency pairs between 2003 and 2016. The prediction performances of state-of-the-art methods (Extreme Gradient Boosting, Artificial Neural Networks, and Support Vector Machines) are compared over the same data with the same sets of features. Results show that density-based visual features contribute considerably to prediction performance.

Article | Citations - WoS: 1, Scopus: 2
Applications of Artificial Intelligence as a Prognostic Tool in the Management of Acute Aortic Syndrome and Aneurysm: A Comprehensive Review (MDPI, 2025)
Ayhan, Cagri; Mekhaeil, Marina; Channawi, Rita; Ozcan, Alp Eren; Akargul, Elif; Deger, Atakan; Soliman, Osama
Acute Aortic Syndromes (AAS) and Thoracic Aortic Aneurysm (TAA) remain among the most fatal cardiovascular emergencies, with mortality rising by the hour if diagnosis and treatment are delayed. Despite advances in imaging and surgical techniques, current clinical decision-making still relies heavily on population-based parameters such as maximum aortic diameter, which fail to capture the biological and biomechanical complexity underlying these conditions. In today's data-rich era, where vast clinical, imaging, and biomarker datasets are available, artificial intelligence (AI) has emerged as a powerful tool to process this complexity and enable precision risk prediction. To date, AI has been applied across multiple aspects of aortic disease management, with mortality prediction being the most widely investigated. Machine learning (ML) and deep learning (DL) models, particularly ensemble algorithms and biomarker-integrated approaches, have frequently outperformed traditional clinical tools such as EuroSCORE II and GERAADA. These models provide superior discrimination and interpretability, identifying key drivers of adverse outcomes. However, many studies remain limited by small sample sizes, single-center designs, and a lack of external validation, all of which constrain their generalizability. Despite these challenges, the consistently strong results highlight AI's growing potential to complement and enhance existing prognostic frameworks. Beyond mortality, AI has expanded the scope of analysis to the structural and biomechanical behavior of the aorta itself. Through the integration of imaging, radiomic, and computational modeling data, AI now allows virtual representation of aortic mechanics, enabling prediction of aneurysm growth rate, remodeling after repair, and even rupture risk and location. Such models bridge data-driven learning with mechanistic understanding, creating an opportunity to simulate disease progression in a virtual environment. In addition to mortality and growth-related outcomes, morbidity prediction has become another area of rapid development. AI models have been used to assess a wide range of postoperative complications, including stroke, gastrointestinal bleeding, prolonged hospitalization, reintubation, and paraplegia, showing that predictive applications are limited only by clinical imagination. Among these, acute kidney injury (AKI) has received particular attention, with several robust studies demonstrating high accuracy in the early identification of patients at risk for severe renal complications. To translate these promising results into real-world clinical use, future work must focus on large multicenter collaborations, external validation, and adherence to transparent reporting standards such as TRIPOD-AI. The integration of explainable AI frameworks and dynamic, patient-specific modeling, potentially through the development of digital twins, will be essential for achieving real-time clinical applicability. Ultimately, AI holds the potential not only to refine risk prediction but to fundamentally transform how we understand, monitor, and manage patients with AAS and TAA.

Article
Performance Investigation of ML Algorithms for Potato Blight Classification: The Role of Hyperparameter Tuning (Springer, 2026)
Saeed, Sadia; Rehman, Hafiz Zia Ur; Hussain, Muhammad Ureed; Khan, Muhammad Umer; Saeed, Muhammad Tallal
Potato is the world's fourth most important food crop, consumed by over a billion people. Early and late blight diseases can reduce yields by up to 40%, leading to severe economic and food security challenges. While manual detection methods are prone to error, automated, image-based machine learning (ML) offers a promising alternative, though its performance depends strongly on proper optimization. This study investigates the role of hyperparameter tuning in improving ML algorithms for potato blight classification. We utilized two datasets: the PlantVillage dataset (500 images per class) and a region-specific Potato Leaf Dataset (PLD) from Pakistan (1628 early blight, 1424 late blight, 1020 healthy). All images were resized to 256 × 256 pixels and augmented. Features were extracted using the Bag-of-Features (BoF) technique, and four classic ML models (Support Vector Machine (SVM), k-Nearest Neighbors (kNN), Linear Discriminant Analysis (LDA), and Random Forest (RF)) were trained. Hyperparameters were optimized via grid search with 5-fold cross-validation. This tuning led to measurable improvements; for instance, SVM accuracy increased from 93.0% to 95.9% on PlantVillage and from 85.0% to 87.0% on PLD. Evaluation using precision, recall, F1-score, and specificity confirmed SVM as the best-performing model. A confusion matrix analysis revealed that most misclassifications occurred between the two blight types due to visual similarity. To translate our findings into practice, we developed a MATLAB Graphical User Interface (GUI) that enables farmers to classify a leaf image in under three seconds and receive precautionary recommendations. This study demonstrates that systematic hyperparameter optimization is crucial for maximizing ML performance and is a key step in building accessible, real-time tools for precision agriculture. Future work will focus on extending the system to mobile and web platforms.
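The tuning procedure described in this last abstract (grid search over hyperparameters, scored by 5-fold cross-validation) can be sketched in a few lines. This is a minimal illustration only: it uses a tiny synthetic 2-D dataset and a plain k-NN classifier as stand-ins for the paper's BoF image features and MATLAB model suite, and the parameter grid is an assumption, not the authors' actual search space.

```python
# Minimal sketch of grid search with 5-fold cross-validation.
# The synthetic data and k-NN classifier are illustrative stand-ins,
# not the study's actual features or models.
import random

def knn_predict(train, query, k):
    """Classify `query` by majority vote among its k nearest training points."""
    nearest = sorted(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], query)))
    votes = [label for _, label in nearest[:k]]
    return max(set(votes), key=votes.count)

def cross_val_accuracy(data, k, folds=5):
    """Mean k-NN accuracy over `folds` cross-validation splits."""
    fold_size = len(data) // folds
    scores = []
    for f in range(folds):
        test = data[f * fold_size:(f + 1) * fold_size]
        train = data[:f * fold_size] + data[(f + 1) * fold_size:]
        correct = sum(knn_predict(train, x, k) == y for x, y in test)
        scores.append(correct / len(test))
    return sum(scores) / folds

def grid_search(data, k_grid):
    """Return the hyperparameter value with the best cross-validated accuracy."""
    return max(k_grid, key=lambda k: cross_val_accuracy(data, k))

random.seed(0)
# Two well-separated 2-D classes standing in for extracted image features.
data = [((random.gauss(0, 1), random.gauss(0, 1)), 0) for _ in range(50)]
data += [((random.gauss(4, 1), random.gauss(4, 1)), 1) for _ in range(50)]
random.shuffle(data)

best_k = grid_search(data, k_grid=[1, 3, 5, 7, 9])
print("best k:", best_k, "cv accuracy:", round(cross_val_accuracy(data, best_k), 3))
```

The same pattern scales to the multi-model, multi-parameter search the paper reports: the outer loop ranges over candidate settings, and the cross-validated score, rather than a single train/test split, decides which setting wins.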

